Well at the top of the manual, the list of constraints[1] reads the same as the actor model[2] (it explicitly uses local state ('store') to model behaviour changes), with the word 'machine' substituted for 'actor'.
If anything it reads like the restricted form of the actor model produced by using only Erlang's gen_fsm or Akka's FSM mixin.
(to be clear, using a restricted form with more constraints is a great thing - even better here, where one of the domains they're serving seems to be fairly restricted execution environments)
[1]:
> Each operation either updates the local store, sends messages to other machines, or creates new machines.
> In P, a send operation is non-blocking; the message is simply enqueued into the input queue of the target machine.
This is really neat. Synchronous languages like Esterel have done great in real-time, safety-critical systems. As the paper notes, the common problems in distributed and operating systems are more asynchronous in nature. Plus, modeling/verification and programming usually use separate tools. They tackle both jobs with one tool for asynchronous programming, and it has already proved itself by finding hundreds of bugs in a USB3 stack that ships to a wide audience.
Programming languages rarely get better introductions and early results than that. Props to the Microsoft Research team on this.
As mentioned the other day, it is efforts like this that make me consider Microsoft, and to a certain extent Apple, the ones driving innovation in safety on mainstream OS stacks.
Maybe Google as well, were it not for their reluctance to improve the NDK beyond being a way to implement Java native methods and port code into Android.
Right now they're focusing these efforts mostly on drivers and the hypervisor; I'm not sure how much the kernel gets in terms of tools like VCC. Just wait until the Midori work starts flowing into their stack - it will probably show up at the app level as safety or performance improvements for .NET.
Calling a core language keyword "goto"? They're brave.
And I'm not convinced of the utility of baking their own C-like imperative language when they could have actually used real C. There are already too many C-like languages out there that are just different enough from C to be annoying.
> Calling a core language keyword "goto"? They're brave.
I thought so too, but it makes sense really. A state machine is only ever in one state and can only move to a new state from the current one - you don't call functions and return to the current state.
Oh, yeah, but the pogrom against labeled jumps is so much a part of software dev culture now that I'd expect that sequence of letters to elicit a negative response regardless of how appropriate they were.
Did you read the manual? They use their own little language, which is imperative but, unlike C, side-effect free. To execute side effects it offers a way to implement machines in regular C; this is, for example, how 'Timer' is implemented in the manual.
A good use for it is to exit early from deeply nested loops, which normally would require several "break"s. Some languages (Java, for instance) support "named breaks", but they tend to be more confusing than just using a goto.
This is more or less the only time I would use a goto over similar constructions like switch/case or continue/break.
If you can rely on the developer to use gotos in clear patterns like that, then great. But if there is a recognizable pattern, it's an opportunity for the language to provide an alternative like with{} or try{}finally{}.
I agree with with{}, less so with try/finally. First, I think exceptions are at least as easy to get wrong in your application (or worse, system) design as goto, and if you get them wrong, they're probably even harder to debug. Second, the resulting code is IMO far less readable.
To illustrate - which would you prefer:
a = None
try:
    a = get_ressource_a()
    b = None
    try:
        b = get_ressource_b(a)
        process(a, b)
    except BException as e:
        handle_errors(e)
    finally:
        cleanup_b(b)
except AException as e:
    handle_errors(e)
finally:
    cleanup_a(a)
or
a = get_ressource_a()
if not a:
    handle_errors()
    return
b = get_ressource_b(a)
if not b:
    handle_errors()
    goto cleanup_a
process(a, b)
cleanup_b:
    # do the cleanup
cleanup_a:
    # do the cleanup
My dream language would have more or less Python's syntax, with dynamic, inferred, but strong types; no exception handling, and instead Swift's "optional" baked into the language; and very smart logging that I can peek into with the error handler. The error handler could be the only side effect allowed when a function is declared pure. I don't need goto if with{} works well with such a system.
Nim provides nearly everything you mention and covers the "finally" case with `defer`. Worth looking at - I re-implemented suckless' slock in Nim[2][3] and it was a very pleasant experience.
Interesting, I never looked at it really. As a statically typed language, how does it handle complex objects, something like a document coming from a document based DB sent over JSON?
There's a JSON parser in the standard library (http://nim-lang.org/docs/json.html). It works by declaring discriminated union (called enum in Nim) of possible JSON types, declared here: https://github.com/nim-lang/Nim/blob/master/lib/pure/json.ni... You get a tree of nodes out of the parser and switch (lightweight pattern matching) on each node type. Nim does exhaustiveness checking, so you will never fail to handle some kind of JSON node by accident. You're forced to deal with nulls explicitly, either via pattern matching or convenience functions like `getOrDefault`.
It's not as nice as F# Type Providers, but it's workable. You could write something similar to Type Providers with macros (given a schema or sample), but I don't know if someone tried this already. With all the standard operators (like property access/assignment and toString equivalent) overloaded to support JSON, it feels quite natural. I wrote a little script consuming a JSON service, it looks like this for example: https://gist.github.com/piotrklibert/b2ba0774244bb7368748a3b...
Nim has its peculiarities and rough edges (it's not even at 1.0 yet), but it's expressive (many of the constructs typically built into languages are implemented as libraries) and fast. C-level fast, without a huge runtime, so for things like this script it's 4x-16x (IIRC, when reading cached data from disk) faster than the compiled F# version (on Mono), for example. My impression of the language is that it's pragmatic and flexible. Also opinionated, which may be both a good and a bad thing, depending on what your preferences are.
That does look quite reasonable, yes. You're right, for a statically typed language it does feel quite dynamic. I think a good blog article comparing Nim to Swift would be very interesting; I couldn't find such a thing.
Thanks for your input!
Edit: Looking at http://roadfiresoftware.com/2015/10/how-to-parse-json-with-s... I think I'd much rather have Nim. This is exactly the sort of thing where types get in the way and slow development down tremendously. If I get external data it should be enough to just be aware that all accesses return an "optional", which has to be dealt with accordingly as part of error handling. Letting types get in the way as well is basically doing the work twice.
but goto does point to a label. By giving it a meaningful name you can convey more information than with other flow control statements (except for function calls).
On the other hand, I think I only used goto once in production in my career (to get out of a double nested loop). And it had a bug.
As usual, missing semantics/intention can be encoded in labels using some kind of naming convention (or in comments). It is, however, also a dangerous approach, because naming conventions are not controlled or enforced by the language.
No, it's just plain goto, to a named label which is (for this case) later in the function.
It's quite common in low-level code where you want to do a bunch of things that can all fail, and you need to rewind and undo the things that succeeded up to the point of failure.
HANDLE_ERROR statements go at the beginning of the block, RESOLVEs go at the end, in the reverse order to the ENSUREs, which go wherever they're required.
Or just P for parallel: safe, parallel, asynchronous programming in a higher-level language with code generation. About as far from BCPL and C as you can get.
Seems like Google handles unicode characters, so those would be as good as any (I guess as long as it's a symbol people actually know how to pronounce). Something like Ω-lang or ∞-lang would be nice.
D is clearly trying to break away from the BCPL > B > C heritage, whereas C++ didn't merit a new letter. Since D is trying to be more logical, it went with the next letter in the sequence.
To be clear, P is used for validating the asynchronous state machines used by the Windows USB drivers, and not for implementing the drivers, which are written in C.
And the USB driver stack implementation is the same for both Windows 8 and Windows 10, so the answer to your question is: yes.
The README says: "P has been used to implement and validate the USB device driver stack that ships with Microsoft Windows 8 and Windows Phone". It's not clear from your answer. Does Windows ship with a "USB device driver stack" written in P, yes or no?
The way these normally work is that they write the state machine in the domain-specific language, verify it, auto-generate code in something like C, and compile that. That's how almost all of them work, since it's easy to go from models to code automatically for state machines. The Github page says:
"Not only can a P program be compiled into executable code, but it can also be validated using systematic testing. P has been used to implement and validate the USB device driver stack that ships with Microsoft Windows 8 and Windows Phone. "
That indicates they modeled it in P, with their tool compiling the P specs into an executable. The individual functions the state machine calls would be other C or assembly functions. Microsoft has tools like VCC, VeriFast, SLAM, etc. to verify C in drivers. I'm curious what combination of them they used on it, if any.
For me it's a very interesting question whether they directly used the generated C code from the P toolchain in the driver or whether they only used P for verification of the state machines and reimplemented them in C.
Directly using the generated code would be a huge achievement and could help avoid a lot of issues that come with reimplementing it. However, it also places quite large demands on the code generation: e.g. the scheduler must fit the target system and be performant for the use case, the infinite-queue semantics that the state machines seem to have are not ideal for a constrained environment, and of course there are questions around memory allocation and garbage collection (which should mostly be avoided in drivers).
The event model is great; it forces inputs and outputs to be simple. I really like that P reads like English - the use of goto, on, in, and send all read well. They use monitor but abbreviate function to fun; why not just write function?
I think it would be nice to drop the braces entirely; they seem to function only as decoration. Also, I would really miss ++: P has many convenience operators, including +=, but no ++.
It was amusing when my government teacher explained the urinal problem to the women in the class during a discussion of "man laws." He first asked where they would go in a four-urinal situation if there was nobody present, then a person at urinal 1, then people at 1 and 3. The men and women had very different answers that were pretty consistent within each gender. The women always asked, "Well, what do you do if 1 and 3 are taken and 2 and 4 aren't an option?" Every guy but me effectively said, "Try not to piss yourself until 1 or 3 becomes available."
"Nick, you aren't weirded out by people sneaking a peek?"
"Nah, my protocol is different. It recognizes root cause is them looking at what they don't need to. If I notice this, I'll simply remind them they should turn away from me before I turn something toward them. Not my eyes either. Most rational and irrational people will avoid that outcome to maintain what dignity they can when exiting a bathroom."
The guys got a kick out of that solution but chose not to upgrade their protocol. Their loss. ;)
Interesting. When I was studying Group algebra and machine organization, I felt there was a strong intersection between these two schools, and I think P is addressing that.
Yes, there should be some sort of RFC about programming language names. The main requirement would be to choose a combination of letters that returns fewer than 1000 results on Google.
The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language. - Donald Knuth
Is there a big difference here, or is this basically just an alternative language for that model?