mossman242

asked on

Adding fields to physical file

Hello experts -

I am just curious whether it is possible to add a field to a physical file on the iSeries without having to recompile every program that uses it. I was thinking of something along the lines of how you can create a logical file to provide a keyed view over a physical file.

If there is a way to do this somewhat easily, it would be critical that I can implement it, as it would save a ton of time over either a) writing extra code or b) recompiling every program.

Thanks for all of your help.
ASKER CERTIFIED SOLUTION by Member_2_276102
(solution text available to members only)

SOLUTION
(solution text available to members only)
mossman242

ASKER

Hi tiliotta -

Thanks for your comment. I have only been programming on the AS/400 for a little over a year, and during this time I have only worked with implicit logical files. If you have a moment, would you be able to show me an example of how an explicit LF would be set up?

I agree with Shadow's comments that adding the field to the PF and then recompiling every program would probably be the best way to go long term, but with 1,500+ programs spread across 1 physical file and 5 other logicals, that would be very time-consuming.

Any additional help you could provide would be much appreciated!
mossman242:

Explicit field lists simply mean that the fields are specified in the DDS. An implicit list is when no fields are listed so that all PF fields are included by default. An explicit list might look like:

   A          R TESTPFF                   PFILE(mylib/TESTPF)
   A*
   A            FLD02
   A            FLD01
   A*
   A          K FLD01

This LF includes two fields from the PF in a specified order. FLD02 is first in the buffer, and FLD01 is last. The LF also happens to be indexed by FLD01.

   A          R TESTPFF                   PFILE(mylib/TESTPF)
   A*
   A          K FLD01

This LF implicitly includes _all_ fields from the PF. Further, the buffer layout matches the layout of the PF and none of the field attributes are overridden. Therefore, the record format matches the PF and is dependent on the PF.

The explicit LF can be independent of the PF layout. In fact, via CHGPF, it is easily possible to add fields to the PF (and delete them, if they are not referenced by the LF) without having to recompile anything.

A major point of external definitions is that external elements, such as a database, can be changed without affecting other objects. Change the database? No problem; the programs can run with no errors.

And there's no need to give up the protection of attributes such as level checks. The protection still exists -- there simply are no level-check errors if the change is done right.

No need to recompile anything. No need to manage authorities on recompiled objects especially when deploying to production. No need to audit change/creation dates on objects for auditors. Etc., etc.

Tom
Hi

Tom's solution is a very clever workaround, but I am with ShadowProgrammer: database level checks are there for a reason.

The majority of systems have been developed without explicit field lists in the logical files, so the effort of retrofitting them and recompiling your database will far outweigh the actual recompilation of the programs.

Using the DSPPGMREF command, it is quite easy to create a program that will recompile all programs based on a given file (there are probably many commercial products out there that do the job).
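
For example (a rough sketch -- library and file names here are placeholders), something like this dumps the cross-reference details for every program in a library into an outfile that can then be queried:

   DSPPGMREF  PGM(MYPGMLIB/*ALL) OUTPUT(*OUTFILE) +
                OUTFILE(QTEMP/PGMREFS)

If I remember rightly, the outfile is based on the QADSPPGM model file, so each record carries the program name (WHPNAM) and the referenced file name (WHFNAM).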

My preferred method is to create the new file, use a utility to recompile all programs into a new library, test the programs for level checks, get a restricted system, and spend maybe an hour moving the objects into production after performing a CHGPF.

Experience tells us all that the one hour saved today can cost several days or weeks further down the line.

Dave
PS:
The other option is to create an extension file, then use a join logical file to allow retrieval of the information.

The only programs that would need to access the files independently are the update programs (join logical files cannot be updated).
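
As a rough illustration (all file and field names here are invented), a join logical over the base file and an extension file keyed on the same field might look like:

   A*  Join LF over the base PF and a hypothetical extension PF
   A          R CUSTJNR                   JFILE(MYLIB/CUSTPF MYLIB/CUSTEXT)
   A          J                           JOIN(CUSTPF CUSTEXT)
   A            JFLD(CUSTNO CUSTNO)
   A*  CUSTNO exists in both files, so JREF says which copy to use
   A            CUSTNO                    JREF(CUSTPF)
   A            CUSTNAME
   A            NEWFLD
   A          K CUSTNO

New fields would then go into the extension file, so the base PF and everything compiled over it is never touched.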

Dave
An "extension file" is a good example of what a relational database can be good for. The database can be extended without making any changes to existing tables. The extension table is simply joined in or accessed by whatever keys define the "relation".

But note that my earlier discussion in no way circumvented database level-checks. My discussion pointed out that format level identifiers for the LFs didn't have to change and therefore there _was_ no level-check error.

This is practically similar to what happens in an SQL SELECT statement that explicitly lists the columns rather than implicitly includes all columns by using "*" as the column list. Level checks are far more rare in SQL applications because the database is accessed using techniques that significantly reduce the possibility of error from them.

Native DB2/400 provides almost the same capability. I've never seen any reason _not_ to take advantage of it.

Especially with auditing becoming far more of an issue in the U.S., it can be highly advantageous to use the capability that has always existed.

Tom
Theo Kouwenhoven
Hi mossman242,

Yes, no problem, as long as you follow these rules:

1. Always add fields to the end of the record.
2. Set the LVLCHK parameter to *NO.

You can even add these fields to the file without copying the data (but take an extra copy, just in case of .......)

by using:

   CHGPF FILE(MYLIB/MYFILE) SRCFILE(MYSRCLIB/QDDSSRC)
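
The level check itself can be switched off either on the file, or per job with an override; for example (a sketch, substitute your own names):

   CHGPF  FILE(MYLIB/MYFILE) LVLCHK(*NO)   /* at the file level      */
   OVRDBF FILE(MYFILE) LVLCHK(*NO)         /* for one job or program */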

But be careful: before you know it, you do an option 14 and re-create the file. (I'm very sure about that; please don't ask why.....)

Good luck,
Murph
One thought about explicit logicals: if you have not got them at the moment, you will need to change the files and recompile ALL the programs anyway (it may be worth doing this time around - your choice).

I would STRONGLY advise against using LVLCHK(*NO) - your data could easily get corrupted in a big way.

Do you have 1,500+ programs that reference this file, or 1,500+ programs in total?

If 1,500+ reference this file, it is quite a change and should be done over a weekend or an evening, which is adequate time to recompile 1,500+ programs.

As Dave suggested, in the absence of any tools to do the job, it is quite easy to run DSPPGMREF over all the program libraries into an outfile; this can then be used to identify the programs that reference the PF and its logicals. You could write a CL program along the lines of the sketch below to recompile the programs required.
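
A bare-bones sketch (untested; the outfile field names come from the QADSPPGM model file as far as I recall, and the compile command depends on your program types):

             PGM
             DCLF       FILE(QTEMP/PGMREFS)  /* DSPPGMREF outfile; compile  */
                                             /* against a permanent copy    */
 READ:       RCVF                            /* next cross-reference record */
             MONMSG     MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE)) /* end of file */
             IF         COND(&WHFNAM *EQ 'MYPF') THEN(DO)
                CRTRPGPGM  PGM(TESTLIB/&WHPNAM) +
                             SRCFILE(MYSRCLIB/QRPGSRC)
             ENDDO
             GOTO       CMDLBL(READ)
 DONE:       ENDPGM

You would want to filter on the logical file names as well, and de-duplicate the program list first, but that is the general shape.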

Let us know if you want a hand writing such a program.

Tony
Please ShadowProgrammer,

Explain to me the big risk I take if I set LVLCHK to *NO. I have done it for years; it is the only way to make it possible to extend files without recompiling.
If you know another way, please let me know.

Regards,
Murph
Murph:

"Risk" is part of the security tradeoff. Security always involves tradeoffs -- benefit vs risk in terms of cost. And a huge part of the calculation involves detailed knowledge of the environment.

If you know the environment sufficiently well, you can make a well-informed judgement, and the lack of level checks can be very acceptable.

Some of the risk is that there _might_ be details you miss. Applications can have a very large number of details. It can be easy to forget that level checks have been turned off at some point buried in an application.

Next month, a database change might cause data in a buffer to move a value into the wrong field in a program's memory. Maybe a flag field ends up being switched off and a tax calculation ends up incorrect. Maybe a medical warning gets switched off. Maybe an incorrect part is inserted into a subassembly.

If the level check were turned on, the database itself would signal the program that there was a mismatch, and the error would be caught. With no signal from the database, the error could go on for years and no one would know.

Who has legal responsibility? Perhaps you do.

Tom
Hello Tom,

Here is an urgent reason to set the level check to *NO.
Our documentation has to follow a lot of rules (demanded by the FDA). One of the rules is that for each changed or created program, a huge set of (unnecessary and ridiculous) documentation must be created -- not useful, only cover-your-ass stuff. So if we compile all programs that use a changed PF, we have to document over 100 programs instead of only one or two. That would take months of extra documenting. So I think this is a very good reason to choose *NO rather than the recompile option. I don't like it, and I know it can go wrong if you don't know exactly what you are doing, but there is no other option.

Regards,
Murph

Murph:

That's also a reason for using views (or LFs) instead of tables (PFs). The underlying table can be changed while the view remains the same. No need for recompiles, and level checks still work correctly.

Only when a particular program needs to be changed in order to process added columns would a recompile be needed. And since there would be a logic change in that case, there's no added documentation beyond what would be done anyway.

Tom
So, if I repeat what you are telling me:
when I change a PF and re-create it (and have to delete and rebuild the LFs), there is no level check on the LF when I try to read or write using that LF, even though the PF has changed and LVLCHK = *YES?

There are requirements.

Assuming that the LF lists its fields and the PF change doesn't change the LF buffer layout, yes, that's correct.

When the LF has an explicit field list, the format level doesn't change unless new fields are added to the LF _or_ the data definition of one or more fields actually changes the buffer layout.

If you change a field from length 10 to length 20, obviously the layout of the buffer passed to the program will change, so the program must be recompiled to recognize the new layout. Likewise, if the order of the fields in the LF is changed, the buffer layout changes; but the LF order would change only if the LF source DDS was changed (or the LF had an implicit list).

If you change a field that isn't in the LF, then there is no LF format change. Hence, no format level ID change; hence, no level check. The LF defines the "view" of the data. If that "view" remains unchanged, then the program will never notice.
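
To picture it with the earlier example (a sketch -- the field attributes here are invented, since the original TESTPF source isn't shown), add a field to the PF and leave the LF source alone:

   A*  TESTPF after the change - FLD03 added, file recompiled via CHGPF
   A          R TESTPFF
   A            FLD01         10A
   A            FLD02          7P 2
   A            FLD03          5S 0

The explicit LF from before still lists only FLD02 and FLD01, so its format level identifier is unchanged and nothing that uses it needs a recompile.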

Tom
Hi Tom,

Yep, but 99.99% of the LFs contain the PF record format, so that's not possible.

Sorry I haven't got back to you sooner, murphey2....

The reason for NOT using LVLCHK(*NO) is that programs will read the record in the format they "believe" the file to be in (which is the format when the program was compiled)... Depending on the change, it is possible for the program to then corrupt the data, which would not be picked up until some time later (this could be days or months)... Lots of work is involved in sorting that out - believe me, because I have gone through it!!!! Past experience tells me not to use LVLCHK(*NO) - I just try to pass on my experience. If you have been using LVLCHK(*NO) for many years, it may be worth arranging to check your data. You may be OK if you have only been adding fields to the end of the format.

One thought about these explicit LFs: I assume that the LFs must be recompiled because you are deleting the PF when you recompile it to add the new field, and I assume that the level ID is expected to be the same as before because, as far as the LF is concerned, nothing has changed.

TOM - can you confirm that if I added a field to the PF in the middle of the record layout, any explicit LF without that new field would keep the previous level ID?

Murphey2, part of this discussion is redundant at the moment because you don't have explicit LFs, but it may be worth considering creating explicit LFs for future use. So come what may, unless you create a separate extension file, you will have to recompile ALL the LFs and ALL the associated programs anyway (except if you do as you have in the past and use LVLCHK(*NO)!!)

I would have a serious word with whoever controls your standards of documentation, as amending 100 individual program documents when all you are doing is recompiling them is a bit overboard. [You should be amending the documentation for the programs referencing this new field.] If you don't get any reasonable response from them, then get the business behind you by ensuring that you include documentation in your project plans, and when someone asks why it will take so long, explain it to them so they can apply "political" pressure for you.

Tony.
Yep, Tony, that is why I wrote

rule 1. ONLY ADD FIELDS TO THE END OF THE FILE.

No other changes are allowed.
A programmer who will NOT follow this rule will be executed within a second!!!!

For the documentation: if I have to recompile 100 programs, I have to add 100 documents to explain that it was for a recompile only, plus 100 risk assessments, plus 100 program move sheets, plus a signature on every document from the software owner, plus a signature from QA.
(OK, OK, some programs can be signed off by group or by application, but I still need to write a lot, and I need a lot of signatures. I know it sounds really, really stupid, but I can't change a thing; I'm only a contractor.)
Tony:

Keep in mind that an LF can explicitly select _and_ order fields in the buffer that it presents any way it wants. That is, the point of an LF is to provide a buffer layout regardless of the physical layout. (Of course, it can also provide an alternative ordering of rows based on different keys.)

An LF is essentially nothing more than a "view", a customized window into the columns of the table. (Plus the alternative index, yada-yada...) When compiled, it provides the 'program' that is called by DB2 to move the data from the DASD buffers to program memory.

As long as the selected columns and their LF definitions don't change, then there is no change to the logical record format. If you add a column to the PF but _don't_ add it to the LF, then the view is unchanged. It doesn't matter _where_ the column is in the PF. Beginning, middle, end... if it isn't listed in the LF, then it doesn't _exist_ in the LF.

An LF should only contain the columns intended for that particular view. By limiting the columns, data movement into the program buffers is also limited. This in itself is a performance enhancement.

Murph:

Understood about the 99.9% issue. That's fairly common for AS/400 apps because it's always easiest just to let the list of LF columns default to all columns.

I would probably start by opening an edit session on the LF source and a second session on the PF source. Then I'd simply copy/paste the whole list of columns from PF to LF. Obviously, this phase _must_ include all columns and they must be maintained in physical order. Later phases can refine that, but not in the beginning. (This will not include any LFs that already explicitly list columns -- the .1%.)

By grabbing all columns, the recompiled LF will technically end up being exactly the same view it started as. But it will have the future advantage of the explicit column list.

I might also start looking for LFs that are used in some programs just for 'existence' tests. There might be programs that CHAIN to a customer LF to verify that a CustomerNumber is valid. Why use an LF that includes every PF column if the only needed column is CustomerNumber?

I'd start creating LFs that could begin to be used for those existence tests even if they duplicate access paths from the original LF. (If they do duplicate, remember that they should also 'share' that access path. No additional space is taken on the system. No additional access path maintenance is done by the system. _AND_ the programs that use those new LFs for existence tests can be slightly better performers.)
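
Such an LF can be tiny (a sketch with invented names):

   A*  Key-only LF for existence checks; it can share the access
   A*  path of any existing LF keyed the same way
   A          R CUSTCHKR                  PFILE(MYLIB/CUSTPF)
   A            CUSTNO
   A          K CUSTNO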

Those LFs wouldn't have to be used immediately. The programs can be changed to reference them over time.

In general, the LF work can all be done without recompiling any programs. All you're doing is recreating LFs in the same technical format and creating new LFs that aren't yet used.

It's a beginning.

Tom
Hey Tom,

I love this, but as I already wrote, I'm a contractor, and this company's own programmers are stuck in a phase of RPG III at the S/38 level. No development, not interested.

So they never will use this.
Yep. And the market wonders why AS/400s are "legacy, old-fashioned, etc." Features that have been available from the beginning aren't used.

Tom
I think we are straying slightly from mossman242's question - who seems to be really quiet...  

mossman242....
You have some differing opinions: whilst I think we all agree there are two ways to add a field without the need to recompile ALL programs referencing the file, we don't agree on the safety aspect.

(a) LVLCHK(*NO), as long as you obey some strict rules (although I am against this method, it is still a valid method)

or

(b) Use explicit LFs, which in the long term would reduce the number of recompiles necessary in a lot of cases; but if you are starting from an existing system, then first time around ALL logicals will need to be recreated and therefore ALL programs would need to be recompiled... ALSO, a lot of analysis and testing is involved if you want to minimise the number of fields defined in each logical; alternatively, just define ALL PF fields in all current logicals, so that in future any new fields just need adding to the LFs as required.


Is this what you were looking for or something more specific... please join in the discussion.
Tony:

The use of COBOL in itself shouldn't have been a performance issue. I use COBOL for a number of functions specifically for performance enhancement. Of the sites I've worked at since the AS/400 was first announced, the number of COBOL vs. RPG sites is exactly even. No difference in performance.

** Please note that the above relates to a comment that I have deleted, so it may not make sense! DaveSlater **

However, if this was essentially converting an _application_ that was designed/architected for a different platform, then I'd be surprised if there _wasn't_ a performance problem. I can imagine attempting to make numerous "features" work the same way they did elsewhere. Now _that_ can be a stupid idea. For example, converting from a platform that has no concept of subfiles and trying to emulate the processing without using subfiles could be a disaster. (Not to mention the later maintenance headaches for qualified AS/400 programmers trying to figure out "Why in the world are they doing it THIS way???")

Anyway, note that an initial recreation of explicit LFs need not require any recompiles of programs. If all that is being done is adding the same explicit list of columns (fields) as were implicitly included, then the programs will not level check.

Creating trivial DDS for a PF and LF to test this only takes a couple minutes. Create a PF with two or three fields with different attributes. Then create a LF that implicitly includes all fields and run DSPFFD against the LF. Save the format level identifier for comparison later. Then update the LF source to explicitly include all of the PF fields and run DSPFFD again. The format level should not change (if the LF order matches the PF order). Then update the LF again to switch the _order_ of the fields or to drop one of the PF fields or to add a derived field. Now DSPFFD _will_ report a format level change.
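
A sketch of that test (invented names and attributes):

   A*  TESTPF - a PF with a couple of fields of different attributes
   A          R TESTPFF
   A            FLD01         10A
   A            FLD02          7P 2
   A*
   A*  TESTLF, version 1 - implicit field list
   A          R TESTPFF                   PFILE(MYLIB/TESTPF)
   A          K FLD01
   A*
   A*  TESTLF, version 2 - same fields, explicitly listed in PF order
   A          R TESTPFF                   PFILE(MYLIB/TESTPF)
   A            FLD01
   A            FLD02
   A          K FLD01

DSPFFD FILE(MYLIB/TESTLF) after each create should report the same format level identifier for versions 1 and 2; reordering or dropping a field in version 2 would change it.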

In short, the initial work involves _only_ LF recompiles if done correctly.

Tom
Hello everyone -

I apologize for being somewhat quiet on this issue. As seems to be happening lately, we had a more pressing project come up and I had to put this one on the back burner for a while.

I agree with Shadow's initial viewpoint that the correct way would be to recompile each and every program. However, in a time-crunched environment it's not an option at the moment.

I think Tom has some pretty good ideas on the relationships between the logical and the physical and how the explicit logical can be made to "modify" the physical.  

I enjoyed reading the posts that were made and have learned quite a bit from this discussion.  If anyone would like to further it at all, I am back and would love to continue on with it.  

Thanks,

Moss