Discussion: using CALL versus COPY / INCLUDE
henry weesie
2003-10-28 09:25:21 UTC
We are using COBOL II on an IDMS mainframe.
I'm told that copy members generate less CPU overhead than subprograms, and
that the use of copy members should therefore be preferred over the use of
subprograms.

So far my employer uses COPY statements to copy source into each program,
but they are open to other structures, such as CALL statements.

I expect calls to have more maintenance advantages than copies: no
recompile of every program that uses a copy member when the copied source
changes, and no separate declarations and subroutines when using calls.
But is the CPU cost much higher?

I'd like to build a strong case for setting up some form of object-oriented
programming in COBOL II.

What are your experiences with calls versus copies?
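For readers less familiar with the two techniques, here is a minimal sketch
of what is being compared (the member, paragraph, and data names are
invented for illustration):

```cobol
      * COPY approach: the common source is included at compile time,
      * so every program that copies DATECALC must be recompiled
      * whenever the member changes.
           COPY DATECALC.
           PERFORM 9000-DATE-CALC.

      * CALL approach: DATECALC is a separately compiled subprogram;
      * when its logic changes, only the subprogram is recompiled
      * (and, with dynamic CALL, callers need not even be re-linked).
           CALL 'DATECALC' USING WS-DATE-IN WS-DATE-OUT.
```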
Michael Mattias
2003-10-28 12:56:23 UTC
Post by henry weesie
We are using COBOL II on an IDMS mainframe.
I'm told that copy members generate less CPU overhead than subprograms, and
that the use of copy members should therefore be preferred over the use of
subprograms.
Yes, CALL <subprogram> costs more CPU than COPY + PERFORM; that's true. It's
also true that 72,128,875 is greater than 72,128,871. That is, the
difference is barely measurable, assuming you are not doing CALL+CANCEL,
CALL+CANCEL, CALL+CANCEL many, many times in a single program run.
Post by henry weesie
I expect calls to have more maintenance advantages than copies: no
recompile of every program that uses a copy member when the copied source
changes, and no separate declarations and subroutines when using calls.
For these maintenance benefits alone, when you have a common routine subject
to change, a dynamic CALL was, is, and will continue to be my 'first
choice.' It's often not just the recompiling; it's finding all the programs
you must recompile.

One other benefit: if you have a large procedure that is only called once or
twice per program run, CALL+CANCEL frees the system resources used by the
called module once it is no longer needed.

Bottom line: there is no one 'right' way to handle common code; it is
extremely application- and environment-specific. And since these methods are
NOT mutually exclusive, there is no reason you can't have one routine COPY'd
(e.g., standard date arithmetic routines, which hardly ever change) while
other routines that do change periodically (e.g., pricing or commission
calculation routines) reside in a callable module.
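The CALL+CANCEL pattern mentioned above, sketched with an invented module
name (YEAREND):

```cobol
      * Load and run a large routine needed only once per run, then
      * CANCEL it so the storage occupied by the dynamically loaded
      * module is released for the remainder of the run.
           CALL 'YEAREND' USING WS-YEAREND-PARMS.
           CANCEL 'YEAREND'.
```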

--
Michael Mattias
Tal Systems, Inc.
Racine WI
***@talsystems.com
JerryMouse
2003-10-28 12:58:59 UTC
Post by henry weesie
We are using COBOL II on an IDMS mainframe.
I'm told that copy members generate less CPU overhead than subprograms, and
that the use of copy members should therefore be preferred over the use of
subprograms.
So far my employer uses COPY statements to copy source into each program,
but they are open to other structures, such as CALL statements.
I expect calls to have more maintenance advantages than copies: no
recompile of every program that uses a copy member when the copied source
changes, and no separate declarations and subroutines when using calls.
But is the CPU cost much higher?
I'd like to build a strong case for setting up some form of object-oriented
programming in COBOL II.
What are your experiences with calls versus copies?
The main program has to branch to either the copied code or a subprogram, so
I can't see that it makes much difference. The increase in link time (for a
subprogram) is offset by the increase in compile time (for a copy member).
The difference in execution time has to be almost unmeasurably small.
Further, if you change something in a subordinate module, you have to
recompile all the target programs when using a copy, versus re-linking all
the target programs when using a subprogram.

As with virtually all attempts at micro-efficiency, you've probably already
wasted more time on the issue than you'll ever get back. But what the heck,
how hard could it be to benchmark the sucker? Kludge up a prototype test,
loop it 1,000 (or 1 million) times, and clock it.
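A benchmark along those lines might look like this; SUBPROG and
9100-COPIED-CODE are invented names, and you would compare the job's
reported CPU figures for the two runs:

```cobol
      * Run 1: time one million dynamic CALLs to the subprogram...
           PERFORM VARYING WS-I FROM 1 BY 1 UNTIL WS-I > 1000000
               CALL 'SUBPROG' USING WS-PARM
           END-PERFORM.

      * Run 2: ...against one million PERFORMs of the equivalent
      * copied-in code.
           PERFORM VARYING WS-I FROM 1 BY 1 UNTIL WS-I > 1000000
               PERFORM 9100-COPIED-CODE
           END-PERFORM.
```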

I personally prefer subprograms. They 'black-box' the code, get the code out
of the way so I can concentrate on the main driving program, prevent
fiddling with the fenced-off code, ease maintenance, and reduce bugs.
William M. Klein
2003-10-28 16:41:53 UTC
Of course, as we all know <G>, IBM dropped support for the VS COBOL II
compiler YEARS ago. So let me provide some information from the CURRENT
"IBM Enterprise COBOL" performance tuning paper:

"Performance considerations using DYNAM with CALL literal (measuring CALL
overhead only):

On the average, for a CALL-intensive application, the overhead associated
with the CALL using DYNAM ranged from 16% slower to 100% slower than
NODYNAM.

Note: This test measured only the overhead of the CALL (i.e., the subprogram
did only a GOBACK); thus, a full application that does more work in the
subprograms is not degraded as much."

but also

"Using CALLs

When using CALLs, be sure to consider using nested programs when possible.
The performance of a CALL to a nested program is faster than an external
static CALL; external dynamic calls are the slowest. CALL identifier is
slower than dynamic CALL literal. Additionally, you should consider space
management tuning (mentioned earlier in this paper) for all CALL intensive
applications.

With static CALLs, all programs are link-edited together, and hence, are
always in storage, even if you do not call them. However, there is only one
copy of the bootstrapping library routines link-edited with the application.

With dynamic CALLs, each subprogram is link-edited separately from the
others. They are brought into storage only if they are needed. However, each
subprogram has its own copy of the bootstrapping library routines
link-edited with it, bringing multiple copies of these routines in storage
as the application is executing.

Performance considerations for using CALLs (measuring CALL overhead only):

CALL to nested programs was 50% to 60% faster than static CALL.
Static CALL literal was 45% to 55% faster than dynamic CALL literal.
Static CALL literal was 60% to 65% faster than dynamic CALL identifier.
Dynamic CALL literal was 15% to 25% faster than dynamic CALL identifier.

Note: These tests measured only the overhead of the CALL (i.e., the
subprogram did only a GOBACK); thus, a full application that does more work
in the subprograms may have different results."
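For readers who have not used nested programs: a minimal sketch of the
structure the paper is referring to (program names invented). The inner
program is contained within the outer one's source, so the compiler can
resolve the CALL directly:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. MAINPROG.
       PROCEDURE DIVISION.
      * A CALL to a program nested inside this one; per the figures
      * quoted above, the fastest CALL flavour.
           CALL 'INNER'.
           GOBACK.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. INNER.
       PROCEDURE DIVISION.
           GOBACK.
       END PROGRAM INNER.
       END PROGRAM MAINPROG.
```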

***

Note: NONE of this deals with the "ease" or "difficulty" of maintenance (and
re-testing). It should also be noted that STATIC calls require link-editing
of the object code. Therefore, static calls have only LIMITED advantages
over NESTED programs and/or COPY'd procedures when it comes to maintenance
issues.

***

For the full Performance Paper, see:

http://www.ibm.com/support/docview.wss?rs=203&q=7001475&uid=swg27001475
--
Bill Klein
wmklein <at> ix.netcom.com
Post by henry weesie
We are using CobolII in a idms mainframe.
I'm told that using copy-members should generate less cpu than using
subprograms and therefore the use of copy-members should be preferred
above the use of subprograms.
So far my employer uses copy statements to copy source into each
program. However they are open for other structures like using
call-statements.
I expect more advantages of using calls instead of copy's in respect
of maintenance. No compile of all programs using copy when source in
the object is changed, no separate declarations en subroutines when
using calls.
But much more expensive CPU?
I'd like to have a strong case in trying to set up some form of object
oriented programming in Cobol II.
What are your experiences in the use of calls versus copy's?
Binyamin Dissen
2003-10-28 17:11:16 UTC
On Tue, 28 Oct 2003 16:41:53 GMT "William M. Klein"
<***@nospam.netcom.com> wrote:

:>Performance considerations for using CALLs (measuring CALL overhead only):

:> CALL to nested programs was 50% to 60% faster than static CALL.
:> Static CALL literal was 45% to 55% faster than dynamic CALL literal.
:> Static CALL literal was 60% to 65% faster than dynamic CALL identifier.
:> Dynamic CALL literal was 15% to 25% faster than dynamic CALL identifier.

I find it interesting that dynamic CALL literal is faster than dynamic CALL
identifier, especially on the initial call.

If the address is cached and reused (for subsequent calls), it should be
about as fast as a static CALL literal.

Is there an explanation?

--
Binyamin Dissen <***@dissensoftware.com>
http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel
William M. Klein
2003-10-28 17:14:26 UTC
I don't know (and I don't think it is "documented"). HOWEVER, it *might* be
that when you use the DYNAM compiler option, the run-time doesn't have to
"think about" which type of CALL each call is, while when you use NODYNAM
with CALL identifier, the run-time needs to look at each CALL to determine
whether it is dynamic or static.
--
Bill Klein
wmklein <at> ix.netcom.com
Joe Zitzelberger
2003-10-29 12:45:32 UTC
Post by Binyamin Dissen
On Tue, 28 Oct 2003 16:41:53 GMT "William M. Klein"
:> CALL to nested programs was 50% to 60% faster than static CALL.
:> Static CALL literal was 45% to 55% faster than dynamic CALL literal.
:> Static CALL literal was 60% to 65% faster than dynamic CALL identifier.
:> Dynamic CALL literal was 15% to 25% faster than dynamic CALL identifier.
I find it interesting that dynamic call literal is faster than dynamic call
identifier, especially on the initial call.
If the address is cached and reused (for subsequent calls) it should be about
as fast as static call literal.
Is there an explanation?
Dynamic CALL identifier has the additional overhead of checking whether the
identifier value has changed and looking for a cached address of an
already-loaded module to reuse.

That is a small operation compared to the dynamic CALL literal, which does
not check the literal for change but still has to resolve the address of the
module if previously used.

The fact that such a tiny operation accounts for 15% to 25% of the entire
call ought to tell us that the time spent on any of it, (4 * tiny) to
(6 * tiny), is really not worth the concern.
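For concreteness, the two dynamic forms being compared (DYNAM in effect;
SUBPROG and WS-PROG-NAME are invented names):

```cobol
      * CALL literal: the target name is fixed in the source, so the
      * run-time can cache and reuse its resolution directly.
           CALL 'SUBPROG' USING WS-PARM.

      * CALL identifier: the target name is a data item whose current
      * value the run-time must inspect on every call before it can
      * reuse a cached address.
           MOVE 'SUBPROG' TO WS-PROG-NAME.
           CALL WS-PROG-NAME USING WS-PARM.
```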
LX-i
2003-10-29 00:59:29 UTC
Post by henry weesie
We are using COBOL II on an IDMS mainframe.
I'm told that copy members generate less CPU overhead than subprograms, and
that the use of copy members should therefore be preferred over the use of
subprograms.
So far my employer uses COPY statements to copy source into each program,
but they are open to other structures, such as CALL statements.
I expect calls to have more maintenance advantages than copies: no
recompile of every program that uses a copy member when the copied source
changes, and no separate declarations and subroutines when using calls.
But is the CPU cost much higher?
I'd like to build a strong case for setting up some form of object-oriented
programming in COBOL II.
What are your experiences with calls versus copies?
I can't speak to the overhead on an IBM mainframe, but in a statically
linked executable I would expect a call to be a little more expensive than a
perform, since the former is resolved during linking (when the object code
is tied together) while the latter is resolved at compile time (when the
object code is actually generated). You'll also have a bit of overhead for
each chunk of memory you pass between the two. The downside is that you
still must re-link each executable when one of the subroutines changes.

That being said, I'm a big fan of calls instead of copybooks. They better
represent a componentized structure (or they can, if done right) by breaking
the system into smaller, less complex pieces.

If there is a way you can use dynamic linking, it has the advantage of not
having to re-link the executables; however, you then incur the CPU overhead
of resolving the call each time it's made during each run. Unisys mainframes
have a mechanism whereby you can statically link a vector to an installed
"subsystem", which is loaded into memory the first time it is called and
merely executed after that. This gives you the best of both worlds: the
speed of a static link, with the ability to swap modules out without having
to re-link (provided the size and number of the parameters do not change).
I'm actually in the process of trying to convert some of our most commonly
executed programs to a scenario like this.
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ / \ / ~ Live from Montgomery, AL! ~
~ / \/ o ~ ~
~ / /\ - | ~ ***@Netscape.net ~
~ _____ / \ | ~ http://www.knology.net/~mopsmom/daniel ~
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
~ I do not read e-mail at the above address ~
~ Please see website if you wish to contact me privately ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Howard Brazee
2003-10-29 16:13:58 UTC
Any difference in efficiency can be important if the code is performed a
million times. If it is used one time, then who cares about the overhead
involved in the call; other considerations are more important.

It is in between where judgment calls need to be made.
Chuck Stevens
2003-10-29 16:30:11 UTC
... And what is efficient in one implementation may entail disastrously high
overhead in another. Know what's efficient and what's not on your
particular platform and your particular implementation of COBOL.

-Chuck Stevens
Peter E.C. Dashwood
2003-10-28 20:13:26 UTC
Henry,

the posts here are offering good advice and you can see the consensus is in
favour of CALL.

If you intend to go to OO COBOL you will obtain all the advantages of a
dynamic calling environment without some of the disadvantages (see post from
Bill Klein).

Personally, if I couldn't use OO, I would (and have...) use a dynamic
calling environment. This provides the most flexibility and ease of
maintenance and (for me at least) the overheads have never been a problem.

(There is no need to even re-link applications when a module changes.
Instead you can spend the time on drawing up your regression testing
plan...<G>)

My own opinion on COPY (when used in the procedure division) is that it is
an abomination.

To me this is exactly the same as NOT using PERFORM for code that is used in
several places in a program.

There was a time (long, long, ago...) when it was necessary to do this
because there were few other options and a COPY ensured that common code
remained consistent across the programs that used it. In this day and age
that is just nonsense.

It also encourages the use of REPLACING to make it look like the code was
tailored for a specific circumstance when it is REALLY just duplicate
code....

A better solution is to write one set of generalised code and dynamically
call it.

Nested programs are a good solution if you are really worried about dynamic
CALL overheads.

(But you shouldn't be. Despite Bill's alarming 16% to 100% CALL overhead,
you will never notice it. If something takes 10 milliseconds or 20, a human
certainly won't tell the difference. The only time this MIGHT be a
consideration is when processing in a batch environment, where you incur the
extra 10 milliseconds possibly millions of times. But batch jobs usually run
unattended and are expected to take a long time anyway...)

Go with CALL. There is really NO argument to use Procedure includes/COPYs
any more.

The best solution of all is to use OO COBOL. You get the lot and you can
wrap your Classes into COM components and re-use them anywhere.

Pete.
Donald Tees
2003-10-29 20:25:29 UTC
Post by Peter E.C. Dashwood
My own opinion on COPY (when used in the procedure division) is that it is
an abomination.
No, the true abominations are copies in the procedure division using
REPLACING. I am working with code that uses the same copy for all files,
plus another 30-page chunk for each screen. It is copied multitudinous times
in each program, merrily changing all the paragraph names, half the code,
and most of the data names each time it is copied. In many cases a single
word is replaced by an entire paragraph. The COPY statements are half a
screen long each, and the resulting code is enough to make you run screaming
and crying back to unit-record equipment. May the programmer who wrote it
rot in hell.

Donald
PS. To make it worse, they decided to be fiendishly clever: each field on
each screen calls the entire screen routine over again recursively, until
the exit button is hit. Then it screams backwards out of the recursion,
executing about 4 million lines of code for a single keystroke. I think it
was probably written by a computer science teacher. I have better than two
million lines of source, of which I'll bet only 250,000 lines or so have
anything at all to do with the actual system logic.
Donald
Michael Mattias
2003-10-29 21:10:09 UTC
Post by Peter E.C. Dashwood
(There is no need to even re-link applications when a module changes.
Instead you can spend the time on drawing up your regression testing
plan...<G>)
Or watching paint flake or grass grow.....


MCM
Howard Brazee
2003-10-30 15:40:13 UTC
Post by Peter E.C. Dashwood
My own opinion on COPY (when used in the procedure division) is that it is
an abomination.
I believe there's a place for this. All of my IDMS programs have a standard
IDMS copy that does error checking after IDMS commands. It gets invoked a
LOT, and it is always the same.
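A sketch of the kind of status-checking copy member described here. The
member name IDMSCHK and the paragraph 9999-IDMS-ABORT are invented;
ERROR-STATUS is assumed to be the usual IDMS status field, and the DML
statement is illustrative only:

```cobol
      * Contents of copy member IDMSCHK (invented name): abort if the
      * previous IDMS command did not complete cleanly.
           IF ERROR-STATUS NOT = '0000'
               PERFORM 9999-IDMS-ABORT
           END-IF.

      * In each program, the member is copied in after every
      * IDMS DML statement:
           OBTAIN CALC CUSTOMER.
           COPY IDMSCHK.
```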

I have seen copied abort paragraphs, and even copied declaratives (plus a
lot of copied code that I agree is unwarranted or worse).

Less useful: we used Y2K date-conversion copy members.
jce
2003-10-31 07:59:09 UTC
I use them because I have a nested module containing a fixed
multidimensional array.

The problem is that the fixed sizes are different in each instance, so I
have a COPY ... REPLACING SIZE-1 BY xx AND SIZE-2 BY yy AND SIZE-3 BY zz....
If I chose standard linkage, I would have to have OCCURS DEPENDING ON within
OCCURS DEPENDING ON within OCCURS DEPENDING ON.

It might be an abomination, but it is very efficient, and people can now use
my very important, always-required function with just a COPY ... REPLACING.

I'd be interested to know how else you could have done this in COBOL in a
manner that was as quick for people to pick up. What's nice is that people
can now use this and get output they expect and understand, while having no
idea how it works (in both the called module and the data they are
populating; they just follow a setup/use/call pattern, almost all cut and
paste, that essentially creates a number of pointers they later use).

I must admit that I have also misused this capability, just not realizing
it at the time... now it's a pain to clean up.
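A sketch of that technique, with invented names: the copy member holds the
table definition with placeholder sizes, and each using program fixes the
dimensions at compile time via pseudo-text REPLACING:

```cobol
      * Copy member TABDEF (contents shown as comments; SIZE-1,
      * SIZE-2, and SIZE-3 are placeholders to be replaced):
      *    01  WORK-TABLE.
      *        05  DIM-1 OCCURS SIZE-1 TIMES.
      *            10  DIM-2 OCCURS SIZE-2 TIMES.
      *                15  CELL PIC S9(4) COMP OCCURS SIZE-3 TIMES.

      * In a using program, the dimensions are fixed at compile time:
           COPY TABDEF REPLACING ==SIZE-1== BY ==10==
                                 ==SIZE-2== BY ==20==
                                 ==SIZE-3== BY ==5==.
```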


JCE
Jeff Lanam
2003-10-30 21:18:29 UTC
Post by henry weesie
We are using COBOL II on an IDMS mainframe.
I'm told that copy members generate less CPU overhead than subprograms, and
that the use of copy members should therefore be preferred over the use of
subprograms.
From the standpoint of a compiler writer, large and complex programs are
harder to generate optimal code for than small, straightforward programs.
Whatever you gain by eliminating CALL instructions, you may lose because the
compiler can't optimize the generated code. Things like register spilling
become more likely, and some optimizations may be impossible.

Large programs are also more likely to expose compiler bugs, and you are
more likely to run into architectural limits, such as the maximum number of
nested IF statements or PERFORM operations. These, of course, vary from
compiler to compiler.

Structure your code into manageable, logical chunks using subprograms, not
copybooks.


Jeff Lanam jeff.lanam at hp.com
COBOL for HP NonStop Systems
Hewlett-Packard
INCITS/J4 COBOL Committee member
henry
2003-11-28 10:17:48 UTC
Thanks for your contributions. Unfortunately, I lost the debate. The verdict
was that calls cost too much I/O and should only be used for large,
complicated programs. Alas, 2 were in favor, 4 against.
