[HECnet] Old protocols in new ones

Johnny Billquist bqt at softjar.se
Sun Mar 28 23:25:36 PDT 2021


On 2021-03-28 23:08, Paul Koning wrote:
> 
> 
>> On Mar 28, 2021, at 4:40 PM, Johnny Billquist <bqt at softjar.se> wrote:
>>
>>
>>
>> On 2021-03-28 22:10, Paul Koning wrote:
>>>> On Mar 28, 2021, at 3:33 PM, Johnny Billquist <bqt at softjar.se> wrote:
>>>>
>>> ...
>>
>>> Multinet is quite another matter.  It claims to be a point to point datalink, but it doesn't obey any of the clearly written requirements of a point to point datalink.  That failure to conform is the reason it doesn't work right.  In the UDP case that is more obvious, but both versions are broken, it's just that the TCP flavor doesn't show it quite so often.
>>
>> Technically, TCP should work just fine. But yes, Multinet even have some funny specific behavior making even TCP a bit tricky.
> 
> No, technically Multinet TCP does NOT work fine.  The issue is that Multinet, whether over TCP or over UDP, fails several of the requirements imposed on point to point datalinks by the DECnet routing spec.  In particular, it fails the requirement of "restart notification".  In the TCP case, it can be hacked to make it work most of the time, but architecturally speaking it's flat out wrong.
> 
> The issue is that there is more to a point to point datalink that merely delivering a packet stream from point A to point B.  That's the main job, of course, but that by itself is not sufficient in the DECnet architecture.

I'm not sure what you think the problem is here. This would be 
interesting to explore.

Restart notification, to my ears, seems to be about informing DECnet if 
the link had to be restarted. Which, I would say, can obviously be done 
with Multinet over TCP, because TCP detects and handles this just fine. 
If the TCP connection is lost, all you need to do is inform DECnet that 
the link went down. And I do exactly that with my TCP connections.
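That "signal DECnet when TCP drops" idea can be sketched in a few lines 
of Python (a minimal sketch: `deliver` and `link_down` are hypothetical 
callbacks into the routing layer, and the simple length-prefixed framing 
here is an illustration, not Multinet's actual header layout):

```python
import socket

def recv_exact(sock, n):
    """Read exactly n bytes from a TCP socket, or None on EOF."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

def run_link(sock, deliver, link_down):
    """Pump packets off a TCP tunnel; treat EOF or any socket error
    as a line restart and notify the routing layer via link_down()."""
    try:
        while True:
            hdr = recv_exact(sock, 2)          # 2-byte length prefix
            if hdr is None:
                break                          # peer closed: link down
            pkt = recv_exact(sock, int.from_bytes(hdr, "little"))
            if pkt is None:
                break
            deliver(pkt)                       # hand packet to DECnet
    except OSError:
        pass                                   # reset, timeout, etc.
    link_down()                                # the restart notification
```

The point is just that TCP itself tells you when the link restarted; 
nothing extra is needed on the wire.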

What else is there, that you think is a problem?


My problem, on the other hand, is that Multinet isn't just setting up 
the TCP connection and then letting things run. Multinet also seems to 
explicitly want one side to be the first to start communicating, and 
seems to drop packets if things don't happen in the order Multinet 
thinks they should. Which is really annoying, since DECnet should just 
be left alone to sort it out. So I've had to start playing more like 
Multinet: designating one side as the active and the other as the 
passive, and delaying signalling at connection time based on this, in 
order to play nicely with VMS.
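The active/passive workaround can be sketched roughly like this (the 
`link_up` callback and the role names are hypothetical; the real 
ordering constraints Multinet imposes are subtler than this):

```python
import socket

def establish(role, host, port):
    """Set up the tunnel socket by configured role: the active side
    connects out, the passive side listens.  A sketch only."""
    if role == "active":
        return socket.create_connection((host, port))
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    srv.close()
    return conn

def bring_up(role, sock, link_up):
    """Declare the link up.  The active side signals immediately and
    talks first; the passive side delays until it has seen the peer's
    first byte, so the 'right' side is the one that speaks first."""
    if role == "active":
        link_up()
    else:
        sock.recv(1, socket.MSG_PEEK)      # block until peer talks
        link_up()                          # byte is peeked, not eaten
```

MSG_PEEK lets the passive side wait for the first packet without 
consuming any of it, so normal packet processing starts cleanly.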

>> It's really annoying how they seem to have gone out of their way to make it harder. But we've talked about this before.
>>
>>> It would have been easier and far more correct to model the Multinet datalink as a broadcast link, just like GRE did.  Unfortunately we can't do that unilaterally because the routing layer behavior for those two subtypes is different -- different hello protocols and different data packet headers.  So we're stuck with the bad decision the original fools made.
>>
>> Right.
>> And we can't do it anyway, since the existing VMS implementation is the way it is, and won't change.
>> If I want to, I could do something on the RSX side which would be more appropriate, and you could obviously do that also in PyDECnet. But we'd still have to deal with the broken VMS ways...
>>
>> But I wouldn't actually model it according to ethernet. For a TCP connection, a ptp link is the perfect way to look at it.
>>
>> And I wouldn't probably use UDP at all. But if I would, then yes, it would be modelled as a broadcast link, even though there would only ever be two points to it.
> 
> There are two ways to make a "data link similar to Multinet that actually works".  One is to run it over TCP and include some mechanism to deliver the "restart detection" requirement.  The other is to run it over either TCP or UDP and call it a broadcast link, i.e., a GRE lookalike.

Well, your restart mechanism is something I do not understand.
I would just run the TCP connection straight as it is. If the connection 
is closed, for whatever reason, this should be signalled to DECnet just 
like DDCMP signals link errors, and when the connection is 
re-established, data just flows again.
It should work just fine without any further stuff. This is pretty close 
to what I do today, with the unfortunate extra fluff required since 
Multinet doesn't like it if the "wrong" side talks first.

> The former would be vaguely like the old "DDCMP emulation" in SIMH 2.9 that I once had in PyDECnet but removed -- the one that sends DMC payloads over TCP.  That might be worth doing if people were interested in deploying it.  I could dust off that old code to serve as the template to follow.

I would not send DMC payloads. Seems like an unnecessary extra 
step/layer with no added value.

> The latter would be like GRE, but it wouldn't really do anything that GRE doesn't do just as well so there wouldn't be a whole lot of point in it.

GRE basically just encapsulates ethernet packets anyway. (Yes, I know 
it can encapsulate other things, but anyway...)
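For reference, the base GRE encapsulation (RFC 2784) really is just a 
4-byte header in front of the payload; a sketch:

```python
import struct

def gre_encap(payload, proto=0x6003):
    """Wrap a payload in a minimal GRE header (RFC 2784): 2 bytes of
    flags/version (all zero here) plus a 2-byte EtherType-style
    protocol field.  0x6003 is DECnet Phase IV routing; 0x6558 would
    mark a bridged Ethernet frame."""
    return struct.pack("!HH", 0, proto) + payload

def gre_decap(frame):
    """Strip the 4-byte base GRE header; return (proto, payload)."""
    flags, proto = struct.unpack("!HH", frame[:4])
    return proto, frame[4:]
```

So a GRE-style tunnel adds essentially no machinery beyond tagging what 
kind of packet is inside.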

>> ...
>>> Actually, the SIMH implementation, and therefore the PyDECnet one, does not do so.  That's different from Multinet, which comes with a connect vs. listen setting.  In the case of DDCMP, both endpoints connect and both listen.  If a connection is made the other pending one is abandoned.  If both connections are made at essentially the same time, things get a bit messy, which is why I don't really like this technique.
>>
>> Yeah. I wouldn't do it that way. That is ugly.
>> Better to designate one as the connector and the other as the listener.
> 
> True, and if it weren't for the fact that SIMH set the rule differently that's what I would have done.

So much baggage...

>> I mean, really. This was already regarded in that same way already in the real world back then. Dial up modems as well as X.25 are the exact same type of thing.
>> Both sides cannot initiate the connection. One is the initiator, and the other the one accepting the connection.
> 
> X.25 requires it because it has what many of us consider a protocol design error, 2-way connection handshake.  DDCMP (and others) uses a 3-way handshake and deals just fine with simultaneous startup.  The only reason the SIMH scheme has a problem is that you don't want to have two connections remain open.  But you can see that DDCMP doesn't mind from the fact that over UDP it has no problem with simultaneous startup.  The rules of the startup state machine explicitly cover that case.

This led me to realize that Multinet more or less tries to do a 2-way 
handshake.
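The difference Paul describes can be shown with a toy model of the 
3-way STRT/STACK/ACK startup (a sketch of the idea, not the actual 
DDCMP startup state machine from the spec): even when both sides begin 
simultaneously, they converge to the running state.

```python
def ddcmp_step(state, msg):
    """One transition of a simplified DDCMP-style startup machine.
    Returns (new_state, reply_or_None)."""
    if msg == "STRT":
        return "ASTRT", "STACK"            # acknowledge peer's start
    if msg == "STACK":
        return "RUN", "ACK"                # startup done, confirm it
    if msg == "ACK" and state == "ASTRT":
        return "RUN", None                 # peer confirmed our STACK
    return state, None                     # ignore anything else

def simulate_simultaneous():
    """Both sides start at once: each begins in ISTRT and sends STRT.
    Messages are exchanged until both queues drain."""
    a = b = "ISTRT"
    a_out, b_out = ["STRT"], ["STRT"]      # both talk first!
    while a_out or b_out:
        if a_out:
            b, reply = ddcmp_step(b, a_out.pop(0))
            if reply:
                b_out.append(reply)
        if b_out:
            a, reply = ddcmp_step(a, b_out.pop(0))
            if reply:
                a_out.append(reply)
    return a, b
```

Both sides end up in RUN; a 2-way handshake, by contrast, has no third 
message to disambiguate which start the acknowledgement belongs to.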

But sure, X.25 is nobody's favorite. :-D

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol

