• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2543
  • Last Modified:

Adding fields to physical file

Hello experts -

I am just curious whether it is possible to add a field to a physical file on the iSeries without having to recompile every program that uses it. I was thinking of something along the lines of how you can create a logical file to make a keyed view over a physical file.

If there is a way to do this somewhat easily, it would be critical for me to implement it, as it would save a ton of time over either a.) extra code or b.) having to recompile every program.

thanks for all of your help.
0
mossman242
Asked:
2 Solutions
 
tliottaCommented:
mossman242:

Yes. But before jumping in, please review this thread -- http://www.experts-exchange.com/Q_21188211.html

The rest of this assumes native I/O rather than SQL. If you're using SQL primarily, it doesn't matter much because it happens more or less automatically.

Without SQL --

Logical Files (views) should be created with explicit field lists rather than with an implicit list of all available fields. This causes a specific format-level identifier to be associated with the record format of the LF. This identifier will remain the same regardless of new fields being added to the underlying PF and will remain even if the LF is recompiled.

Because of this, you can add new fields to a PF and no program recompiles are needed.

Of course, the new fields won't be loaded into memory when the programs are run; but that shouldn't matter. When you make code changes that refer to new fields, you'll go through a recompilation step for the program anyway.

And of course, any programs that need the new fields will need to reference a LF that includes those fields. The LF can be an existing one that has had the field added to the list, in which case all programs that use that LF must also be recompiled. Or it can be a new LF that is cloned from an existing one and had the new field(s) added.

In the short run, creating a new LF that is based on a previous LF doesn't add significantly to the system's overhead. Presumably the keys will be the same for the old and new LF, so only one (shared) access path will exist and need to be maintained. Since it already exists and is already maintained, a second view only adds a few KB for the LF definition; it doesn't need to allocate any space for the index.

In the long run, if changes happen frequently, you can end up with too many LFs to make sense of. But who can say if it's better to have excess LFs or to use LFs that retrieve more fields than are needed in a given situation?

In short, if you access by explicit LFs, you're well on your way to what you're thinking.

Lots more discussion is possible from various viewpoints.

Tom
0
 
ShadowProgrammerCommented:
I must admit I can see some benefits from these explicit logicals, but I don't consider the recompilation to be a big problem.

The whole point of the System/38 (not sure about the /36 - never used one) through to the iSeries was that the database is secure and solid, with built-in safety (i.e., level checks etc.), and is difficult to corrupt.  It is only as IBM have been forced to open up the system to compete and offer similar functionality to other systems that they are offering people the option of corrupting their database.

For this level of security/safety I am prepared to recompile ALL the programs using a file.  It is almost irrelevant whether you recompile 50 programs using the PF and 5 LFs or 500 programs using the PF and ALL LFs.  I accept it takes longer, but I would also hope that adding fields to files is not an everyday occurrence, and it should be considered a medium/major change to any system.

Programs which don't use the new field only need recompiling, even when the field is inserted into the middle of the record - that is a major plus that I think some midrange people have just become accustomed to over the years and don't appreciate for what it is.
Think back to the days when programmers had to change ALL the programs to cater for a new field, because the database was defined internally.

End of speech - I am now getting off my soapbox.

Tony.


0
 
mossman242Author Commented:
Hi tliotta -

Thanks for your comment.  I have only been programming on the AS/400 for a little over a year, and during this time I have only worked with implicit logical files.  If you have a moment, would you be able to show me an example of how an explicit LF would be set up?

I agree with Shadow's comments that adding the field to the PF and then recompiling every program would probably be the best way to go long term, but with 1500+ programs spread across 1 physical file and 5 other logicals, that would be very time consuming.

Any additional help you could provide would be much appreciated!
0
 
tliottaCommented:
mossman242:

Explicit field lists simply mean that the fields are specified in the DDS. An implicit list is when no fields are listed so that all PF fields are included by default. An explicit list might look like:

   A          R TESTPFF                   PFILE(mylib/TESTPF)
   A*
   A            FLD02
   A            FLD01
   A*
   A          K FLD01

This LF includes two fields from the PF in a specified order. FLD02 is first in the buffer, and FLD01 is last. The LF also happens to be indexed by FLD01.

   A          R TESTPFF                   PFILE(mylib/TESTPF)
   A*
   A          K FLD01

This LF implicitly includes _all_ fields from the PF. Further, the buffer layout matches the layout of the PF and none of the field attributes are overridden. Therefore, the record format matches the PF and is dependent on the PF.

The explicit LF can be independent of the PF layout. In fact, via CHGPF, it is easily possible to add fields to the PF (and delete them, if they are not referenced by the LF) and not have to recompile anything.
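As a sketch of that CHGPF workflow (library, file, and field names here are invented for illustration): add the new field to the PF's DDS source member, then let CHGPF apply the new definition in place, keeping the existing data:

```
/* In the DDS member for TESTPF, append the new field, e.g.:     */
/*    A            NEWFLD        10A                             */
/* Then apply the changed definition; existing data is kept:     */
CHGPF      FILE(MYLIB/TESTPF) SRCFILE(MYLIB/QDDSSRC)
```

Any LF over TESTPF that explicitly lists its fields (and doesn't include NEWFLD) keeps its format-level identifier, so programs opening that LF keep running without a recompile.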

A major point of external definitions is that external elements, such as a database, can be changed without affecting other objects. Change the database? No problem; the programs can run with no errors.

And there's no need to give up the protection of attributes such as level checks. The protection still exists -- there simply are no level-check errors if it's done right.

No need to recompile anything. No need to manage authorities on recompiled objects especially when deploying to production. No need to audit change/creation dates on objects for auditors. Etc., etc.

Tom
0
 
daveslaterCommented:
Hi

Tom's solution is a very clever workaround, but I am with ShadowProgrammer. Database level checks are there for a reason.

The majority of systems have been developed without explicit field lists in the logical files, and therefore the amount of effort in recompiling your database will far outweigh the actual recompilation of the programs.

It is quite easy, using the DSPPGMREF command, to create a program that will recompile all programs that are based on a file (there are probably many commercial products out there to do the job).
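For example (library names are placeholders), the cross-reference can be dumped to an outfile and then queried for the file in question:

```
DSPPGMREF  PGM(MYPGMLIB/*ALL) OUTPUT(*OUTFILE) +
             OUTFILE(QTEMP/PGMREFS)
```

The outfile holds one row per referenced object; selecting the rows whose referenced-object name matches the PF or one of its LFs gives the list of programs to recompile, which a small CL program can then loop over.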

My preferred method is to create the new file, use a utility to re-compile all programs into a new library, test the programs for level checks, get a restricted system and spend maybe an hour moving the objects into production after performing a CHGPF.

Experience tells us all that the one hour saved today can cost several days or weeks later down the line.

Dave
0
 
daveslaterCommented:
Ps
the other option is to create an extension file, then use a join logical to allow retrieval of the information.

The only programs that would need to access the files independently are the update programs (join logicals cannot be updated).
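A join logical over the base file and the extension file might look something like this (all names are hypothetical; both files are assumed to share the key CUSTNO):

```
   A          R CUSTJNR                   JFILE(MYLIB/CUSTPF MYLIB/CUSTEXT)
   A          J                           JOIN(CUSTPF CUSTEXT)
   A                                      JFLD(CUSTNO CUSTNO)
   A            CUSTNO                    JREF(CUSTPF)
   A            CUSTNAME
   A            NEWFLD
   A          K CUSTNO
```

Read-only programs can then use CUSTJNR as if the new field had always been in the record; only the update programs touch CUSTPF and CUSTEXT directly.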

Dave
0
 
tliottaCommented:
An "extension file" is a good example of what a relational database can be good for. The database can be extended without making any changes to existing tables. The extension table is simply joined in or accessed by whatever keys define the "relation".

But note that my earlier discussion in no way circumvented database level-checks. My discussion pointed out that format level identifiers for the LFs didn't have to change and therefore there _was_ no level-check error.

This is practically similar to what happens with an SQL SELECT statement that explicitly lists the columns rather than implicitly including all columns by using "*" as the column list. Level checks are far more rare in SQL applications because the database is accessed using techniques that significantly reduce the possibility of error from them.
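In SQL terms (table and column names invented for illustration), the explicit list - or better, a view that freezes it - shields programs from columns added to the table later:

```sql
-- Unaffected when a new column is added to the table:
SELECT CUSTNO, CUSTNAME
  FROM MYLIB/CUSTPF

-- The SQL analogue of an explicit-list LF:
CREATE VIEW MYLIB/CUSTV AS
  SELECT CUSTNO, CUSTNAME
    FROM MYLIB/CUSTPF
```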

Native DB2/400 provides almost the same capability. I've never seen any reason _not_ to take advantage of it.

Especially with auditing becoming far more of an issue in the U.S., it can be highly advantageous to use the capability that has always existed.

Tom
0
 
theo kouwenhovenCommented:
Hi mossman242,

Yes, no problem, as long as you follow these rules:

Always add fields to the end of the record,
and set the LVLCHK parameter to *NO.

You can even add these fields to the file without copying the data (but take an extra copy, in case of .......)

by CHGPF FILE(MYLIB/MYFILE)          
      SRCFILE(MYSRCLIB/QDDSSRC)  

But be careful: before you know it, you do an option 14 to re-create the file.  (I'm very sure about that, please don't ask why.....)

Good luck,
Murph
0
 
ShadowProgrammerCommented:
One thought about explicit logicals is that if you have not got them at the moment, you will need to change the files and recompile ALL the programs anyway (it may be worth doing this time around - your choice).

I would STRONGLY advise against using LVLCHK(*NO) - your data could easily get corrupted in a big way.

Do you have 1500+ programs which reference this file, or 1500+ programs in total?

If 1500+ reference this file, it is quite a change and should be done over a weekend or an evening, which is adequate time to recompile 1500+ programs.

As Dave suggested, in the absence of any tools to do the job, it is quite easy to run DSPPGMREF over all the program libraries into an outfile; this can then be used to identify the programs which reference the PF and its logicals.  You could write a CL program to recompile the programs required.

Let us know if you want a hand writing such a program.

Tony
0
 
theo kouwenhovenCommented:
Please ShadowProgrammer,

Explain to me the big risk I take if I set LVLCHK to *NO. I have done it for years; it is the only way to make it possible to extend files without recompiling.
If you know another way, please let me know.

Regards,
Murph
0
 
tliottaCommented:
Murph:

"Risk" is part of the security tradeoff. Security always involves tradeoffs -- benefit vs risk in terms of cost. And a huge part of the calculation involves detailed knowledge of the environment.

If you know the environment sufficiently well, you can make a well-informed judgement, and the lack of level checks can be very acceptable.

Some of the risk is that there _might_ be details you miss. Applications can have a very large number of details. It can be easy to forget that level checks have been turned off at some point buried in an application.

Next month, a database change might cause data in a buffer to move a value into the wrong field in a program's memory. Maybe a flag field ends up being switched off and a tax calculation ends up incorrect. Maybe a medical warning gets switched off. Maybe an incorrect part is inserted into a subassembly.

If the level check was turned on, the database itself would signal the program that there was a mismatch and the error would be caught. With no signal from the database, the error could go on for years and no one would know.

Who has legal responsibility? Perhaps you do.

Tom
0
 
theo kouwenhovenCommented:
Hello Tom,

Here is an urgent reason to set the level check to *NO.
Our documentation has to follow a lot of rules (demanded by the FDA). One of the rules is that for each changed or created program, a huge set of (unnecessary and ridiculous) documentation must be created - not useful, only cover-your-ass stuff. So if we compile all programs that use a changed PF, we have to document over 100 programs instead of only one or two. That would take months of extra documenting. So I think this is a very good reason to choose *NO and not the recompile option. I don't like it, and I know it can go wrong if you don't know exactly what you are doing, but there is no other option.

Regards,
Murph

0
 
tliottaCommented:
Murph:

That's also a reason for using views (or LFs) instead of tables (PFs). The underlying table can be changed while the view remains the same. No need for recompiles, and level checks still work correctly.

Only when a particular program needs to be changed in order to process added columns would a recompile be needed. And since there would be a logic change in that case, there's no added documentation beyond what would be done anyway.

Tom
0
 
theo kouwenhovenCommented:
So if I repeat what you are telling me:
when I change a PF and re-create it (and have to delete and rebuild the LFs), there is no level check on the LF when I try to read or write using that LF, even if the PF has changed and LVLCHK = *YES?????

0
 
tliottaCommented:
There are requirements.

Assuming that the LF lists its fields and the PF change doesn't change the LF buffer layout, yes, that's correct.

When the LF has an explicit field list, the format level doesn't change unless new fields are added to the LF _or_ the data definition of one or more fields actually changes the buffer layout.

If you change a field from length 10 to length 20, obviously the layout will change in the buffer passed to the program, so the program must be recompiled to recognize the new layout. Likewise, if the order of the fields in the LF is changed, the buffer layout changes; but the LF order would change only if the LF source DDS was changed (or the LF had an implicit list).

If you change a field that isn't in the LF, then there is no LF format change. Hence, no format level ID change; hence, no level check. The LF defines the "view" of the data. If that "view" remains unchanged, then the program will never notice.

Tom
0
 
theo kouwenhovenCommented:
Hi Tom,

Yep, but 99.99% of the LFs contain the PF record format,
so that's not possible.

0
 
ShadowProgrammerCommented:
Sorry I haven't got back to you sooner murphey2....

The reason for NOT using LVLCHK(*NO) is that programs will read in the record in the format they "believe" the file to be in (which is the format when the program was compiled)...   Depending on the change, it is possible for the program to then corrupt the data, which would not be picked up until some time later (this could be days or months)... Lots of work involved in sorting that out - believe me, because I have gone through it !!!!  Past experience tells me not to use LVLCHK(*NO) - I just try to pass on my experience.   If you have been using LVLCHK(*NO) for many years, it may be worth arranging to check your data. You may be OK if you have only been adding fields at the end of the format.

One thought about these explicit LFs: I assume that the LFs must be recompiled, because you are deleting the PF when you recompile it to add the new field, and I assume that the level ID is expected to be the same as before because, as far as the LF is concerned, nothing has changed.

TOM - can you confirm that if I added a field to the PF in the middle of the record layout, any explicit LF without that new field would keep the previous level ID?

Murphey2, part of this discussion is redundant at the moment because you don't have explicit LFs, but it may be worth considering creating explicit LFs for future use.  So come what may, unless you create a separate extension file, you will have to recompile ALL the LFs and ALL the associated programs anyway (except if you do as you have in the past and use LVLCHK(*NO) !!)

I would have a serious word with whoever controls your standards of documentation, as amending 100 individual program documents when all you are doing is recompiling them is a bit overboard.  [ You should be amending the documentation for the programs referencing this new field. ]  If you don't get any reasonable response from them, then get the business behind you by ensuring that you include documentation in your project plans, and when someone asks why it will take so long, explain it to them so they can apply "political" pressure for you.

Tony.
0
 
theo kouwenhovenCommented:
Yep Tony that is why I wrote

rule 1. ONLY ADD FIELDS TO THE END OF THE FILE.

No other changes are allowed.
A programmer who will NOT follow this rule will be executed within a second !!!!

For the documentation: if I have to recompile 100 programs, I have to add 100 documents to explain that it was for a recompile only, + 100 risk assessments + 100 program move sheets + a signature on every document from the software owner + a signature from QA.
(OK, OK, some programs can be signed by group or by application, but I still need to write a lot, and need a lot of signatures.) I know it sounds really, really stupid, but I can't change a thing; I'm only a contractor.
0
 
tliottaCommented:
Tony:

Keep in mind that an LF can explicitly select _and_ order fields in the buffer that it presents any way it wants. That is, the point of an LF is to provide a buffer layout regardless of the physical layout. (Of course, it can also provide an alternative ordering of rows based on different keys.)

An LF is essentially nothing more than a "view", a customized window into the columns of the table. (Plus the alternative index, yada-yada...) When compiled, it provides the 'program' that is called by DB2 to move the data from the DASD buffers to program memory.

As long as the selected columns and their LF definitions don't change, then there is no change to the logical record format. If you add a column to the PF but _don't_ add it to the LF, then the view is unchanged. It doesn't matter _where_ the column is in the PF. Beginning, middle, end... if it isn't listed in the LF, then it doesn't _exist_ in the LF.

An LF should only contain the columns intended for that particular view. By limiting the columns, data movement into the program buffers is also limited. This in itself is a performance enhancement.

Murph:

Understood about the 99.9% issue. That's fairly common for AS/400 apps because it's always easiest just to let the list of LF columns default to all columns.

I would probably start by opening an edit session on the LF source and a second session on the PF source. Then I'd simply copy/paste the whole list of columns from PF to LF. Obviously, this phase _must_ include all columns and they must be maintained in physical order. Later phases can refine that, but not in the beginning. (This will not include any LFs that already explicitly list columns -- the .1%.)

By grabbing all columns, the recompiled LF will technically end up being the exact same view as it started as. But it will have the future advantage of the explicit column list.

I might also start looking for LFs that are used in some programs just for 'existence' tests. There might be programs that CHAIN to a customer LF to verify that a CustomerNumber is valid. Why use a LF that includes every PF column if the only needed column is CustomerNumber?
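A minimal existence-check LF of that sort needs nothing but the key (names hypothetical):

```
   A          R CUSTCHKR                  PFILE(MYLIB/CUSTPF)
   A            CUSTNO
   A          K CUSTNO
```

If CUSTNO is already the leading key of an existing LF, this one shares that access path, so it costs almost nothing and never level-checks no matter what else is added to CUSTPF.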

I'd start creating LFs that could begin to be used for those existence tests even if they duplicate access paths from the original LF. (If they do duplicate, remember that they should also 'share' that access path. No additional space is taken on the system. No additional access path maintenance is done by the system. _AND_ the programs that use those new LFs for existence tests can be slightly better performers.)

Those LFs wouldn't have to be used immediately. The programs can be changed to reference them over time.

In general, the LF work can all be done without recompiling any programs. All you're doing is recreating LFs in the same technical format and creating new LFs that aren't yet used.

It's a beginning.

Tom
0
 
theo kouwenhovenCommented:
Hey Tom,

I love this, but as I already wrote, I'm a contractor, and the company's own programmers are stuck at RPG III on S/38 level. No development, not interested.

So they never will use this.
0
 
tliottaCommented:
Yep. And the market wonders why AS/400s are "legacy, old-fashioned, etc." Features that have been available from the beginning aren't used.

Tom
0
 
ShadowProgrammerCommented:
I think we are straying slightly from mossman242's question - who seems to be really quiet...  

mossman242....
you have some differing opinions, whilst I think we all agree there are two ways to add a field without the need to recompile ALL programs referencing the file, we don't agree on the safety aspect.

(a) Level Check (*NO) as long as you obey some strict rules   (although I am against this method, it is still a valid method)

or

(b) Use explicit LFs, which in the long term would reduce the number of recompiles necessary in a lot of cases. But if you are starting from an existing system, then first time around ALL logicals will need to be recreated and therefore ALL programs would need to be recompiled...   There is ALSO a lot of analysis and testing involved if you want to minimise the number of fields defined in each logical; alternatively, for all current logicals just define ALL PF fields, so that in future any new fields just need adding to the LFs as required.


Is this what you were looking for, or something more specific? ... Please join in the discussion.
0
 
tliottaCommented:
Tony:

The use of COBOL in itself shouldn't have been a performance issue. I use COBOL for a number of functions specifically for performance enhancement. Of the sites I've worked at since the AS/400 was first announced, the number of COBOL vs. RPG sites is exactly even. No difference in performance.

** Please note that the above relates to a comment that I have deleted, so it may not make sense! DaveSlater **

However, if this was essentially converting an _application_ that was designed/architected for a different platform, then I'd be surprised if there _wasn't_ a performance problem. I can imagine attempting to make numerous "features" work the same way they did elsewhere. Now _that_ can be a stupid idea. For example, converting from a platform that has no concept of subfiles and trying to emulate the processing without using subfiles could be a disaster. (Not to mention the later maintenance headaches for qualified AS/400 programmers trying to figure out "Why in the world are they doing it THIS way???")

Anyway, note that an initial recreation of explicit LFs need not require any recompiles of programs. If all that is being done is adding the same explicit list of columns (fields) as were implicitly included, then the programs will not level check.

Creating trivial DDS for a PF and LF to test this only takes a couple minutes. Create a PF with two or three fields with different attributes. Then create a LF that implicitly includes all fields and run DSPFFD against the LF. Save the format level identifier for comparison later. Then update the LF source to explicitly include all of the PF fields and run DSPFFD again. The format level should not change (if the LF order matches the PF order). Then update the LF again to switch the _order_ of the fields or to drop one of the PF fields or to add a derived field. Now DSPFFD _will_ report a format level change.
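The test can be scripted roughly like this (all names invented; the DDS edits between steps are done by hand):

```
CRTPF      FILE(MYLIB/TESTPF) SRCFILE(MYLIB/QDDSSRC)
CRTLF      FILE(MYLIB/TESTLF) SRCFILE(MYLIB/QDDSSRC)
DSPFFD     FILE(MYLIB/TESTLF)   /* note the format level identifier */

/* Edit TESTLF's DDS to list all PF fields explicitly, then:       */
DLTF       FILE(MYLIB/TESTLF)
CRTLF      FILE(MYLIB/TESTLF) SRCFILE(MYLIB/QDDSSRC)
DSPFFD     FILE(MYLIB/TESTLF)   /* identifier should be unchanged  */

/* Reorder or drop a field in the DDS, recreate, and DSPFFD will   */
/* now show a different format level identifier.                   */
```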

In short, the initial work involves _only_ LF recompiles if done correctly.

Tom
0
 
mossman242Author Commented:
Hello everyone -

I apologize for being somewhat quiet on this issue.  As seems to be happening lately, we had a more pressing project come up and I had to put this one on the back burner for a while.

I agree with Shadow's initial viewpoint that the correct way would be to recompile each and every program.  However, in a time-crunched environment it's not an option at the moment.

I think Tom has some pretty good ideas on the relationships between the logical and the physical and how the explicit logical can be made to "modify" the physical.  

I enjoyed reading the posts that were made and have learned quite a bit from this discussion.  If anyone would like to further it at all, I am back and would love to continue on with it.  

Thanks,

Moss
0
