<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On Mar 27, 2021, at 4:51 PM, Thomas DeBellis <<a href="mailto:tommytimesharing@gmail.com" class="">tommytimesharing@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class="">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" class="">
<div class=""><p class="">I believe our site had three situations. Our DN20's had both
KMC's and DUP's. The KMC's were used for local computer center
communications of 10 to 30 feet (the 20's were running in a star
configuration). The DECnet of the time was reliable enough until
you put a continuous load on the KMC's. If you wanted to
guarantee transmission, then you had to write an application level
protocol on top of that to declare a connection down and to
renegotiate. This still works with the inconsistent NI
implementation on KLH10.<br class=""></p></div></div></blockquote><div>The KMC was a microprocessor (same or similar to the one in the DMC-11) which sat on the Unibus and converted the device under control to look like a DMA device to the host system. Microcode was written to support KMC/DUP, KMC/DZ and KMC/LP, although I don’t know if that last one ever made it into customer hands. The main problem with using the KMC was that it had to poll the device it was controlling and would burn through Unibus cycles even when there was no traffic.</div><div><br class=""></div><div> John.</div><div><br class=""></div><blockquote type="cite" class=""><div class=""><div class=""><p class="">
</p><p class="">I do not recall whether we ever determined where the problem was;
the Tops-20 DECnet III implementation of the time, the DN20 (MCB)
or the KMC. I don't recall what the KMC had for hardware error
detection; since the lines were synchronous, I would imagine it
would flag certain framing errors, at a minimum. The DN20's were
left largely unused once we got NI based DECnet, whose performance
blew the KMC right out of the water. The CI speeds were just
about incomprehensible for the time. We put timing code into our
transfer application; I can't remember the exact speeds, but they
were highly admired.</p><p class="">We had considered removing the DN20's, but conservatively kept
them as a fallback in case of a CI or NI outage. The other
reason was for the 'non-local' lines that ran to other parts of
the campus and long distance to CCnet nodes. These both used
DUP's. I can't remember how the campus lines were run, but the
long distance was handled by a pair of 9600 baud modems on leased
lines. I can't remember how the modem was configured; synchronous
or asynchronous.<br class="">
</p><p class="">In the case of the DUP's, there were plenty of errors to be had,
but they were not as highly loaded as the KMC's, which were
saturated until the NI and CI came along.<br class="">
</p><p class="">The third case was that of the DN65 to talk to our IBM hardware.
That was data center local and ran a KMC. The protocol that was
spoken was HASP bi-sync. I don't recall what the error rate was;
HASP would correct for this. A bucketload of data went over that
link. While I had to mess a tiny bit with the DN60 PDP-11 code to
change a translate table entry, I don't remember that I had to
fiddle with the DN60 that much. IBMSPL was another thing
entirely; we were an early site and I had known one of the
developers at Marlboro. It had some teething problems, but
eventually was fairly reliable.</p><p class="">I was wondering whether the problem of running DDCMP over UDP
might be one of error timing. If you blew it on a KMC or DUP, the
hardware would let you know pretty quickly, within milliseconds. The
problem with UDP is how soon you declare an error. If you have a
packet going a long way, it might take too long to declare the
error. It's a thought, but you can get delays in TCP, too, so I'm
not sure if the idea is half-baked.<br class="">
</p>
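The timing question above can be made concrete: a link layer carried over UDP has to supply its own acknowledgement and retransmission timers, because the only error signal it will ever get is silence. A minimal sketch of such a loop (the ACK format, timeout, and retry count here are illustrative assumptions, not taken from PyDECnet or any real DDCMP implementation):

```python
import socket

# Illustrative values only; a real DDCMP layer negotiates nothing like this.
REXMT_TIMEOUT = 1.0   # seconds to wait for an ACK before retransmitting
MAX_TRIES = 4         # give up (declare the circuit down) after this many

def send_reliable(sock, peer, payload):
    """Send one datagram, retransmitting until an ACK arrives.

    Returns True if acknowledged, False if the circuit should be
    declared down.  On a KMC or DUP the hardware reported a line
    error within milliseconds; over UDP the only signal is a timeout.
    """
    sock.settimeout(REXMT_TIMEOUT)
    for attempt in range(MAX_TRIES):
        sock.sendto(payload, peer)
        try:
            reply, addr = sock.recvfrom(2048)
        except socket.timeout:
            continue          # lost datagram (or lost ACK): retransmit
        if addr == peer and reply == b'ACK':   # toy ACK, for illustration
            return True
    return False              # declare the circuit down
```

The point of the sketch is the asymmetry Tom describes: a hardware error is reported almost instantly, while here the sender must wait out MAX_TRIES timeouts (several seconds) before it can conclude anything at all.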
<div class="moz-cite-prefix">On 3/27/2021 1:59 PM, John Forecast
wrote:<br class="">
</div>
<blockquote type="cite" cite="mid:4F9AFE47-9EEB-43F8-AA63-5646CA1D1ED1@forecast.name" class="">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" class="">
<br class="">
<div class="">
<blockquote type="cite" class="">
<div class="">On Mar 27, 2021, at 11:06 AM, Mark Berryman <<a href="mailto:mark@theberrymans.com" class="" moz-do-not-send="true">mark@theberrymans.com</a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<meta http-equiv="Content-Type" content="text/html;
charset=UTF-8" class="">
<div style="word-wrap: break-word; -webkit-nbsp-mode: space;
line-break: after-white-space;" class="">DDCMP was
originally designed to run over intelligent synchronous
controllers, such as the DMC-11 or the DMR-11, although it
could also be run over async serial lines. Either of
these could be local or remote. If remote, they were
connected to a modem to talk over a circuit provided by a
common carrier, and async modems had built-in error
correction. From the DMR-11 user manual describing its
features:
<div class="">DDCMP implementation which handles message
sequencing and error correction by automatic
retransmission</div>
<div class=""><br class="">
</div>
</div>
</div>
</blockquote>
<div class=""><br class="">
</div>
No. DDCMP was designed way before any of those intelligent
controllers. DDCMP V3.0 was refined during 1974 and released as
part of DECnet Phase I. The customer I was working with had a
pair of PDP-11/40’s, each having a DU-11 for DECnet
communication at 9600 bps. DDCMP V4.0 was updated in 1977 and
released in 1978 as part of DECnet Phase II which included
DMC-11 support. The DMC-11/DMR-11 included an onboard
implementation of DDCMP to provide message sequencing and error
correction. Quite frequently, customers would have a DMC-11 on a
system communicating with a DU-11 or DUP-11 on a remote system.</div>
<div class=""><br class="">
</div>
<div class=""> John.</div>
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">
<div style="word-wrap: break-word; -webkit-nbsp-mode: space;
line-break: after-white-space;" class="">
<div class="">In other words, DDCMP expected the
underlying hardware to provide guaranteed transmission
or be running on a line where the incidence of data loss
was very low. UDP provides neither of these.</div>
<div class=""><br class="">
</div>
<div class="">DDCMP via UDP over the internet is a very
poor choice and will result in exactly what you are
seeing. This particular connection choice should be
limited to your local LAN where UDP packets have a much
higher chance of surviving.</div>
<div class=""><br class="">
</div>
<div class="">GRE survives much better on the internet
than UDP does, and TCP guarantees delivery. If possible,
I would recommend using one of these encapsulations for
DECnet packets going to any neighbors over the internet
rather than UDP.</div>
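One practical wrinkle with the TCP option: TCP guarantees delivery but is a byte stream, so DDCMP message boundaries have to be recovered from the count field in the DDCMP header rather than from datagram edges. A sketch of that reassembly, assuming the standard 8-byte DDCMP data-message header (SOH = 0x81, 14-bit count in bytes 1-2); header and data CRC checking is omitted here for brevity:

```python
def recv_exact(sock, n):
    """Read exactly n bytes from a TCP stream socket."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed the stream')
        buf += chunk
    return buf

SOH = 0x81   # DDCMP numbered data message

def read_ddcmp_data(sock):
    """Reassemble one DDCMP data message from a TCP byte stream.

    TCP preserves ordering and delivery but not message boundaries,
    so the 14-bit count field in the header tells us how much
    payload (plus a 2-byte data CRC) follows the 8-byte header.
    """
    hdr = recv_exact(sock, 8)
    if hdr[0] != SOH:
        raise ValueError('not a data message')
    count = hdr[1] | ((hdr[2] & 0x3F) << 8)   # 14-bit byte count
    body = recv_exact(sock, count + 2)        # payload + data CRC-16
    return hdr, body[:count]
```

With GRE or UDP the encapsulation preserves packet boundaries and none of this framing is needed; that is one reason the two transports behave so differently under load.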
<div class=""><br class="">
</div>
<div class="">Mark Berryman<br class="">
<div class="">
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">On Mar 27, 2021, at 4:40 AM, Keith
Halewood <<a href="mailto:Keith.Halewood@pitbulluk.org" class="" moz-do-not-send="true">Keith.Halewood@pitbulluk.org</a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div class="WordSection1" style="page:
WordSection1; caret-color: rgb(0, 0, 0);
font-family: Helvetica; font-size: 12px;
font-style: normal; font-variant-caps: normal;
font-weight: normal; letter-spacing: normal;
text-align: start; text-indent: 0px;
text-transform: none; white-space: normal;
word-spacing: 0px; -webkit-text-stroke-width:
0px; text-decoration: none;">
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">Hi,<o:p class=""></o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">I might have posted
this to just Paul and Johnny but it’s
probably good for a bit of general
discussion and it might enlighten me because
I often have a lot of difficulty in
separating the layers and functionality
around tunnels of various types, carrying
one protocol on top of another.<o:p class=""></o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">I use Paul’s excellent
PyDECnet and about half the circuits I have
connecting to others consist of DDCMP
running over UDP. I feel as though there’s
something missing but that might be
misunderstanding. A DDCMP packet is
encapsulated in a UDP one and sent. The
receiver gets it or doesn’t because that’s
the nature of UDP. I’m discovering it’s
often the latter. A dropped HELLO or its
response brings a circuit down. This may
explain why there’s a certain amount of
flapping between PyDECnet’s DDCMP over UDP
circuits. I notice it a lot between area 31
and me, but much less so with others.<o:p class=""></o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">In the old days, DDCMP
was run over a line protocol (sync or async)
that had its own error correction/retransmit
protocol, was it not? So a corrupted packet
containing a HELLO would be handled at the
line level and retransmitted usually long
before a listen timer expired?<o:p class=""></o:p></div>
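The effect of that line-level retransmission can be put in rough numbers. If each datagram is lost independently with probability p, a bare HELLO exchange fails when either direction is dropped, while a link layer that retries loses a message only when every attempt fails. A toy calculation (the loss rate and retry count are made-up illustrative figures, not measured HECnet values):

```python
# Toy numbers: how link-level retransmission masks datagram loss.
p = 0.02          # assumed 2% loss per UDP datagram (illustrative)
retries = 4       # link-level transmit attempts per message

# Bare UDP: the HELLO (or its response) is simply gone.
p_hello_lost = 1 - (1 - p) ** 2          # either direction dropped

# With DDCMP-style retransmission underneath: all attempts must fail.
p_msg_lost = p ** retries

print(f'bare UDP, one exchange lost:   {p_hello_lost:.4f}')
print(f'with {retries} retries, message lost: {p_msg_lost:.2e}')
```

Even at a modest 2% loss rate, roughly one HELLO exchange in 25 fails outright over bare UDP, while the retransmitting link layer loses a message about once in several million, which is why the old hardware recovered long before any listen timer expired.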
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">Are we missing that
level of correction and relying on what
happens higher up in DECnet to handle
missing packets?<o:p class=""></o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">I’m having similar
issues (at least on paper) with an
implementation of the CI packet protocol
over UDP having initially and quite fatally
assumed that a packet transmitted over UDP
would arrive and therefore wouldn’t need any
of the lower level protocol that a real CI
needed. TCP streams are more trouble in
other ways.<o:p class=""></o:p></div>
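One way to see what the fatal assumption costs: even a minimal reliability layer over UDP needs per-packet sequence numbers so the receiver can at least detect a gap, much as DDCMP's modulo-256 message numbering does. A hypothetical gap detector, not part of any real CI implementation:

```python
class SeqTracker:
    """Detect lost datagrams by modulo-256 sequence number.

    A hypothetical helper: the sender stamps each datagram with an
    8-bit sequence number (as DDCMP numbers its data messages) and
    the receiver reports any numbers that were skipped.
    """
    MOD = 256

    def __init__(self):
        self.expected = 0

    def receive(self, seq):
        """Return the list of sequence numbers skipped before seq."""
        missed = []
        while self.expected != seq:
            missed.append(self.expected)
            self.expected = (self.expected + 1) % self.MOD
            if len(missed) >= self.MOD:      # wrapped all the way: hopeless
                break
        self.expected = (seq + 1) % self.MOD
        return missed

tracker = SeqTracker()
assert tracker.receive(0) == []        # in order
assert tracker.receive(1) == []
assert tracker.receive(4) == [2, 3]    # datagrams 2 and 3 were lost
```

Detection is only half the job, of course; a usable layer would then have to request retransmission and buffer out-of-order arrivals, which is exactly the lower-level protocol the real CI provided.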
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">Just some thoughts<o:p class=""></o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class=""><o:p class=""> </o:p></div>
<div style="margin: 0cm 0cm 0.0001pt;
font-size: 11pt; font-family: Calibri,
sans-serif;" class="">Keith</div>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br class="">
</blockquote>
</div>
</div></blockquote></div><br class=""></body></html>