A few items to fill in what's been said before:
1) Basic SYSPLEX
Shared DASD (supporting reserve/release), point-to-point CTC connections between all systems, and a "common time base". The common time base may be either one or more Sysplex Timers (a dedicated clock) or an emulation of same if all members run in LPARs on the same physical processor. As of early versions of OS/390, you need a BASIC SYSPLEX to share the JES2 spool; earlier versions only needed shared DASD. A component called "XCF" provides communication services between members of the Sysplex.
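For reference, most of this gets tied together in a COUPLExx parmlib member, which names the sysplex, points at the couple datasets, and (if you use CTCs for XCF signalling) lists the path devices. A minimal sketch, with made-up dataset names and device numbers:

   COUPLE  SYSPLEX(PLEX1)               /* sysplex name                */
           PCOUPLE(SYS1.XCF.CDS01)      /* primary sysplex couple DS   */
           ACOUPLE(SYS1.XCF.CDS02)      /* alternate sysplex couple DS */
   PATHOUT DEVICE(0E40)                 /* outbound XCF signalling CTC */
   PATHIN  DEVICE(0E41)                 /* inbound XCF signalling CTC  */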
2) Parallel SYSPLEX
Basic SYSPLEX, plus a coupling facility. At one time there was a dedicated box sold as a CF, or you could use an "Integrated Coupling Facility", which was an LPAR on a normal box. Nowadays IBM just sells generic boxes configured however you need them; our current couplers are z890s with no normal channels. Either way, the CF runs CFCC ("Coupling Facility Control Code"). You define a CF LPAR in your IOCP (which defines what devices are where and what LPARs there are), and when you activate that LPAR, the CFCC is loaded automatically. From that point on it's a black box and normally needs no attention, except monitoring to make sure you don't fill its storage; if that happens, the whole works usually comes down.
OS images communicate with the coupling facility via coupling links. These are special channels that use either high-speed fiber or, for very short distances (<200 feet, IIRC), ICB links, which are parallel copper cables. Coupling links are defined as a special channel type but otherwise look like just another I/O device. All systems in the plex need to be connected to the CF.
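For what it's worth, both the CF LPAR and the coupling links live in the ordinary IOCP source alongside everything else. A rough sketch of the kind of statements involved (CHPID numbers and LPAR names are only illustrative; later boxes use the peer channel types shown here, older ones used sender/receiver pairs):

   RESOURCE PARTITION=((PROD1,1),(PROD2,2),(CF01,3))
   CHPID    PATH=(F0),TYPE=CFP,PARTITION=((PROD1,CF01)),SHARED
   CHPID    PATH=(F1),TYPE=ICP,PARTITION=((PROD2,CF01)),SHARED

TYPE=CFP is a fiber (ISC) peer link and TYPE=ICP is an internal coupling link between LPARs on the same box; ICB links get their own type as well (CBP, if I remember right).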
The coupling facility contains "structures", which are basically data elements shared between the systems in the plex. You define the structures in a policy kept in a shared couple dataset; they're allocated in the CF and filled when used. Common uses for CF structures are:
1) DB2 data
2) GRS (inter-system enqueue/locking) data when you're running in GRS "Star" mode.
3) Shared Master catalog (recent innovation)
4) MQ Series Queues.
5) JES2 checkpoint (I think)
The point of this is to provide a sort of big shared RAM disk that can be accessed very quickly and will survive the loss of one or more OS images. A normal configuration duplicates everything, so there will be two CFs connected to all images. You can duplex structures between CFs, or use commands to move them back and forth at will. If a CF fails but at least one OS image is still active, it can usually reconstruct the lost structures in the backup CF.
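If memory serves, the structure definitions actually go into a CFRM policy, which you build with the IXCMIAPU administrative data utility and then activate with SETXCF. A sketch along these lines (sizes, serial numbers and the second structure name are invented; ISGLOCK is the real GRS star lock structure):

   //CFRMPOL  JOB (ACCT),'CFRM POLICY'
   //DEFINE   EXEC PGM=IXCMIAPU
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     DATA TYPE(CFRM) REPORT(YES)
     DEFINE POLICY NAME(CFRM01) REPLACE(YES)
       CF NAME(CF01) TYPE(002086) MFG(IBM) PLANT(02)
          SEQUENCE(000000012345) PARTITION(0E) CPCID(00) DUMPSPACE(2000)
       CF NAME(CF02) TYPE(002086) MFG(IBM) PLANT(02)
          SEQUENCE(000000067890) PARTITION(0E) CPCID(00) DUMPSPACE(2000)
       STRUCTURE NAME(ISGLOCK) SIZE(8192) PREFLIST(CF01,CF02)
       STRUCTURE NAME(IXC_DEFAULT_1) SIZE(16384) PREFLIST(CF02,CF01)
                 DUPLEX(ALLOWED)
   /*

You then start the policy with SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRM01, and later push a structure to the other CF with something like SETXCF START,REBUILD,STRNAME=ISGLOCK,LOCATION=OTHER.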
One more note on CTCs:
The old parallel (bus/tag) CTC was a hardware device, but nobody uses those anymore. ESCON and (newer) FICON CTCs are just a fiber patch cable connected between channels, directly or through a switching device. The magic is handled in the channel microcode: one side pretends to be a controller (CHPID type CTC), while the other side acts as a channel (CHPID type CNC). So any emulation of this would have to include emulation of the controller behavior, which I don't believe is very well documented outside of IBM (although it has been available to OEM CPU vendors).
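A sketch of what the ESCON flavor might look like in the IOCP (CHPIDs, director ports, and device numbers are all invented; each side defines the other end as an SCTC control unit with SCTC devices behind it):

   * side A, the "channel" end
   CHPID    PATH=(20),TYPE=CNC,SWITCH=01
   CNTLUNIT CUNUMBR=0200,PATH=(20),LINK=(C4),UNIT=SCTC,UNITADD=((00,8))
   IODEVICE ADDRESS=(0500,8),CUNUMBR=0200,UNIT=SCTC
   * side B, the "control unit" end
   CHPID    PATH=(30),TYPE=CTC,SWITCH=01
   CNTLUNIT CUNUMBR=0300,PATH=(30),LINK=(C8),UNIT=SCTC,UNITADD=((00,8))
   IODEVICE ADDRESS=(0600,8),CUNUMBR=0300,UNIT=SCTC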
Final disclaimer:
ALL of this depends on a level of software support that's beyond what we can legally run on Hercules (which is a shame). I think the only LEGAL use of real CTC support might be VTAM-to-VTAM connections, or JES NJE. CF support is right out, unless someone can reverse engineer the coupling link protocol and the CFCC behavior, and even then there's no OS we can legally use to talk to it; the ONLY OS that currently supports the CF is z/OS.
One way to collect data about how the old bus/tag CTC works might be to run two copies of MVS under VM R6 with a VTAM-to-VTAM CTC connection between them. I don't recall whether the version of VTAM we have supports this, or whether VM R6 does either, but I believe VM has some tracing facilities that might tell us something.
-----Original Message-----
From: Enrico Sorichetti [mailto:e.sorichetti-nc/***@public.gmane.org]
Sent: Wednesday, February 09, 2005 11:05 AM
To: H390-MVS-***@public.gmane.org
Subject: [H390-MVS] SYSPLEX and LPAR Re: racf and tso
Post by fausap72:
Ok... now it's still more clear. So just to add another word :-) if I understood, the SYSPLEXing done via CF uses two (or more than two) LPARs, because we're talking about the same piece of hardware.
SYSPLEX is a very large and foggy category of things.
For example, from a certain release of OS/390 the system will/must come
up in MONOPLEX mode (a sysplex with only one CPC).
Even in OS/390 2.9, a prerequisite for installing CICS TS was the
system logger, which in turn had the base sysplex functions as a
prerequisite... yes, even for a monoplex a couple dataset was needed.
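If I remember right, the couple dataset itself gets formatted with the IXCL1DSU utility before you can use it; roughly something like this (names and sizes are just an example):

   //FMTCDS   JOB (ACCT),'FORMAT CDS'
   //STEP1    EXEC PGM=IXCL1DSU
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     DEFINEDS SYSPLEX(PLEX1)
              DSN(SYS1.XCF.CDS01) VOLSER(CDS001)
              DATA TYPE(SYSPLEX)
              ITEM NAME(GROUP) NUMBER(50)
              ITEM NAME(MEMBER) NUMBER(8)
   /*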
Let's make a little outline of the hardware involved and how it is used.
CTC (channel-to-channel adapter)
used for VTAM and GRS (ring)
(no shared spool for JES2 from 2.something)
SYSPLEX Timer
the name says it all... clock sync for all the CPCs
(central processing complexes, from a logical point of view)
(hardware and cables)
needed for JES2 MAS
two LPARs, which from a logical point of view emulate two CPCs,
may use an internal feature whose name I do not remember
(only from a certain model upwards)
then comes the queen of the dance:
the CF, or coupling facility
(I do not remember if it used special cables)
it's really a true CPU with central storage and microcode...
it's a CPC with a microcoded operating system, the "LIC"
the CF contains the control structures, which is a more sophisticated
name for "data buffers"
GRS (star)
here, just for chatting, is an unattributed reference (a sketch of the little test program it mentions follows the excerpt):
********************************************************************
excerpt from one of the Cheryl Watson Bulletins
********************************************************************
A reader (who prefers to remain anonymous) sent in the following results of his conversion from GRS Ring to GRS Star (requires a coupling facility). The results were pretty impressive:

"Thought you might like to see some numbers from converting a five member GRS ring (with RESMIL 5) to a GRS star using a 9674-C05 processor.

"Running a program that did a STCK, ENQ, STCK, DEQ, STCK on a unique QNAME-RNAME as a batch job I saw the following numbers:

"In a ring: Mean ENQ time 55.1 milliseconds, mean DEQ time 51.8 ms, min ENQ time 38.2 ms, min DEQ time 31.7 ms, max ENQ time 73.3 ms, max DEQ time 112.2 ms.

"In a star: Mean ENQ time .9 milliseconds, mean DEQ time .6 ms, min ENQ time .5 ms, min DEQ time .3 ms, max ENQ time 2.2 ms, max DEQ time .96 ms.

"Average ENQ improvement 59 to 1, average DEQ improvement 90 to 1."
*******************************************************
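the little test program he describes is basically just this (a minimal assembler sketch; the qname/rname are made up, and the timing arithmetic and reporting are left out):

ENQTIME  CSECT
         STM   14,12,12(13)            save caller registers
         BALR  12,0                    establish base register
         USING *,12
         STCK  T1                      TOD clock before ENQ
         ENQ   (QNAME,RNAME,E,8,SYSTEMS)  exclusive, SYSTEMS scope
         STCK  T2                      TOD clock after ENQ
         DEQ   (QNAME,RNAME,8,SYSTEMS)
         STCK  T3                      TOD clock after DEQ
*        (compute T2-T1 and T3-T2 here and report the deltas)
         LM    14,12,12(13)            restore and return to caller
         SR    15,15                   zero return code
         BR    14
QNAME    DC    CL8'ENQTIMER'           unique qname for the test
RNAME    DC    CL8'UNIQUE01'           unique rname, length 8
T1       DS    D                       store-clock areas
T2       DS    D
T3       DS    D
         END   ENQTIME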
in a data sharing environment the DB2 buffers are kept in CF structures
same thing if, for example, you want VSAM data sharing with integrity:
you have to use VSAM buffers in CF structures
for optimum performance of a JES2 MAS the checkpoint buffers should
be in CF structures
same thing for the catalog buffers for the CAS
The terminology is not very accurate, since a lot of time has passed
since I was involved in SYSPLEX activity, but it should be enough to
give an idea (through examples) of the functions of the components
involved.
regards
enrico sorichetti
P.S. it's a pity...
I have the teeth but I do not have the bread (a z/OS to play with)