The future of Python: Concurrency devoured, Node.js next on menu

The PyBay 2017 conference, held in San Francisco over the weekend, began with a keynote about concurrency. Though hardly a draw for a general interest audience, the topic – an examination of multithreaded and multiprocess programming techniques – turns out to be central to the future of Python. Since 2008, the Python …


  1. casperghst42

    I do use Python, but I still find it very silly that they refuse to implement a switch..case statement; it causes one to end up with very clunky code.

    1. FluffyERug

      Switch Case

      Utterly trivial to implement....

      See https://stackoverflow.com/questions/60208/replacements-for-switch-statement-in-python
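      A minimal sketch of the usual substitute - a dict of handler functions, with .get() supplying the default arm (the names here are illustrative):

```python
# A common Python stand-in for switch/case: map case labels to
# handler functions and let dict.get() supply the default branch.

def handle_get():
    return "fetching"

def handle_post():
    return "storing"

def handle_unknown():
    return "unsupported method"

DISPATCH = {
    "GET": handle_get,    # each key plays the role of a case label
    "POST": handle_post,
}

def handle(method):
    # .get() acts as the default: arm of the switch
    return DISPATCH.get(method, handle_unknown)()
```

      Unlike a real switch, nothing checks that the label set is fully covered.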

      1. Pascal Monett Silver badge

        Trivial ?

        And which of the 10+ examples that absolutely do not function like Switch do you recommend ?

        1. FluffyERug

          Re: Trivial ?

          Actually, Switch/Case is a code smell that can be completely done away with. Especially in languages such as Python.

          1. Roo

            Re: Trivial ?

            "Switch/Case is a code smell"

            In this case: He who smelt it dealt it.

            This whole "code smell" thing has become utter bollocks. It is too often used to promote personal prejudice over and above sound engineering backed by objective reasoning, it's become a short cut for "I know better than you".

            As it happens the 'switch' statement is a way of representing a common assembler idiom of a 'jump table', which happens to be fairly efficient - and it rather handily tends to keep all the code local which means it'll fit inside a cache line or two if you are lucky/clever. Not using a switch statement where one would fit nicely would be a code stench in my view - and that's my personal prejudice, but at least I can back it up with some objective reasons why it can fit some scenarios better than the alternatives...

            If you want some code smells to work on I suggest you start with the JVM, then the Java libraries and work your way up to Spring. In the case of the JVM you start with a massive --ing runtime that takes an age (in machine terms) to start and consumes vastly more memory than it actually requires to operate - and it does this because it's the only way it can approach the speed of a compiled language.

            I'm hoping that would keep you busy enough to lay off on the switch statement. :)

          2. Kristian Walsh Silver badge

            Re: Trivial ?

            Switch/Case is a code smell that can be completely done away with. Especially in languages such as Python.

            If it's a code smell, it's a smell of good design. Switch/case is designed to enforce small sets of values, especially when coupled with enumeration types. "Languages such as Python" don't enforce types at all (by default; I know about Python 3's type hints), so attempting to enforce values is a little meaningless.

            In languages that support it, a switch/case block is telling you something very important about the author's mental model of the code at the time they wrote it. It's saying "At this point in the program, I expect this variable to have one of this limited set of constant values"*

            If you're using enumerations, switch/case additionally allows the compiler to do coverage checking for you (with the warning that your switch block doesn't test for all possible cases).

            If/elif/else cannot convey that information.

            Saying something "can be completely done away with" is not a useful argument. Ultimately, all you need is 'if zero then goto' for any programming language, but filling your code with such constructs strips it of any hint of what the hell you were trying to achieve when you wrote it. There's a strong argument that the whole point of having high-level languages in the first place is to capture the intentions of the programmer, because hand-optimised machine code is pretty opaque to a maintainer.

            * There are, sadly, exceptions: Swift's switch/case "value bindings" feature ignores the "limited and constant" nature of switch/case in an attempt to be "helpful", and in doing so reduces the structure down to a pretty-printed way of writing if/elseif/else. If you're using "clever" value bindings in Swift, you really should be using if-elseif-else, because all you're doing with value bindings is hiding one kind of test, if, (i.e., "evaluate expression and compare result") within the language structure, switch/case, normally used for a different kind of test.

            1. Robert Grant

              Re: Trivial ?

              "Languages such as Python" don't enforce types at all (by default; I know about Python 3's type hints), so attempting to enforce values is a little meaningless.

              Python is strongly typed, and enforces those types. Google the difference between strong/weak and static/dynamic typing - it's a good education in programming basics that code boot camps don't always cover. Happy learning! :-)
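              The distinction in a few lines, for what it's worth - Python happily rebinds a name to any type (dynamic), but refuses to mix unrelated types at runtime (strong):

```python
# Dynamically typed: no declared types; x can be rebound to anything.
x = "1"

# Strongly typed: str + int is rejected at runtime, not silently coerced.
try:
    result = x + 1
except TypeError:
    result = int(x) + 1   # the conversion must be explicit
```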

              1. Kristian Walsh Silver badge

                Re: Trivial ?

                You could have made that objection without coming across as a condescending git, you know.

                I was discussing the type-enforcement features of the language itself: Enumeration types and switch-case illustrate an advantage of time-of-compilation ("static") type knowledge, versus time-of-execution ("dynamic") type knowledge.

                As I was talking about the Python language, it's entirely correct to say that there's no enforcement of data types, because the language itself has no concept of expected types for function arguments. And, while you are also entirely correct that the Python runtime enforces datatypes, that's too late for any feature, such as enumerations, that requires compile-time type knowledge.

          3. Orv Silver badge

            Re: Trivial ?

            I will grant that switch/case is easy to code wrong in some situations. (e.g., forgetting the break; statement, which can lead to subtle bugs.) For long lists of simple values, though, I feel they're a lot less cluttered to read than a long "if/then/else if..." block. The values being tested for end up buried in syntactic clutter in the middle of the line, making them harder to spot.

      2. Anonymous Coward

        Re: Switch Case

        I love the way the if .. else suggestion gets 167 upvotes.

        167 people who have no idea what the difference is and what the switch case is for ...

        1. kuiash

          Re: Switch Case

          It's for Duff's device isn't it?

          Switches inside loops with gotos and breaks & conditional continues FTW! I helped write a data analysis app back in the 80s. IIRC that's how we treated resampling. Why? I don't know. Guess we thought it was clever!

          Oh. Switch is also useful for breaking C/C++ compatibility.

        2. Brewster's Angle Grinder Silver badge

          Re: Switch Case

          "167 people who have no idea that the difference is and what the switch case is for ..."

          You'll have to enlighten me, then.

          Back in the day, I remember C compilers that would sometimes generate lookup tables for switch statements. But mainly they ended up as the asm equivalent of if-then statements.

          And while I'm here, Duff's device is known to be a performance handicap. Famously, removing numerous instances from the X server reduced code size and increased executable speed.

          1. Roo

            Re: Switch Case

            "And while I'm here, Duff's device is known to be a performance handicap"

            I think a lot of people neglected to pay attention to what Tom Duff was trying to achieve: specifically "loop unrolling" with a compiler that didn't do it - with a minimum of code.

            He was also counting cycles on some fairly exotic big iron - which had a very different set of strengths & weaknesses in comparison to *most* that followed it (eg: memory that keeps up with the core clock, small to zero caches and maybe a couple of cycles max for a memory fetch).

      3. Dan 55 Silver badge

        Re: Switch Case

        All you're showing is Stack Overflow at its worst.

        1. kuiash

          Re: Switch Case

          Here's Linus at his best ranting about boolean switch statement warnings...

          https://lkml.org/lkml/2015/5/27/941

          LOL!

    2. Charlie Clark Silver badge

      I do use language X, but I still find it very silly that they refuse to implement a Y statement

      Pretty much true of all programming languages. I write a lot of Python code and find dispatching much preferable to the SWITCH statement.

  2. Herby

    I'll wait...

    For python 4.

    It will happen someday, and then the 2/3 mess will be behind us. Until then, I'll keep my whitespace to myself, one tab at a time.

    1. Brewster's Angle Grinder Silver badge

      Re: I'll wait...

      The javascript split (ecmascript 4) held up the language for a decade. And there was also a lost decade between C++98 and C++11. (And I've lost track of what's happening with Perl 6.) What was it with the early 2000s?

    2. Ucalegon

      Re: I'll wait...

      No pun intended?

    3. foxyshadis

      Re: I'll wait...

      Good luck with that; PHP seems to be the only language interested in major versions anymore, and its major versions would be minor versions to any other language. Python is probably going to be asymptotically on 3 forever.

      1. Orv Silver badge

        Re: I'll wait...

        Perhaps, like TeX, they should have approximated pi more closely with each revision.

  3. Anonymous Coward

    RIP GIL

    They need to remove the GIL.

    It's very easy with event driven programming to accidentally block the event loop with a long running operation - and much like Windows 3 message loops, doing so blocks pretty much everything else, which is rather absurd in this day and age.

    So async programming works best when you can:

    1. Either have multiple concurrent event loops - say 1 thread per event loop, so that's Twisted out of the picture - but you do have to remember not to pass objects that won't be thread safe out of their originating thread,

    2. Or you're happy to push long running objects onto their own pool threads (somehow returning in a timely manner to the event loop if the pool is full) - again remembering thread safety.

    One starts to realise why COM marshalling got so complex.
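    For what it's worth, option 2 is roughly what asyncio's executor support gives you: push the blocking call onto a pool thread and keep the loop alive (a minimal sketch, with time.sleep standing in for any long-running operation):

```python
import asyncio
import time

def long_running():
    # A blocking call that would freeze the event loop if called directly.
    time.sleep(0.2)
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # Hand the blocking work to the default thread pool; the event
    # loop keeps servicing other tasks while it runs.
    blocking = loop.run_in_executor(None, long_running)
    ticks = 0
    while not blocking.done():
        await asyncio.sleep(0.05)   # proof the loop is still alive
        ticks += 1
    return await blocking, ticks

result, ticks = asyncio.run(main())
```

    The thread-safety caveat from the comment above still applies: anything the pooled function touches must be safe to use off the loop thread.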

    PS If anyone from the Twisted project is reading this, why did you have to make Deferred such an awful API? Any plans to fix it ever?

    1. thames

      Re: RIP GIL

      Ah yes, the GIL, the favourite whipping boy of people who have heard of Python but don't actually have much experience with it. There are four major independent implementations of Python which are intended for serious commercial use. Two have a GIL, and two don't. The two that don't have a GIL have (or had) major corporate backing, while the ones that do have a GIL did not. Oddly enough, people prefer the versions that have a GIL over the ones that don't by a huge margin. It seems that the GIL isn't a serious enough concern for people who actually write Python software to really be bothered about it.

      1. Anonymous Coward

        Re: RIP GIL

        Oooh Mr Clever, that told us.

    2. Charlie Clark Silver badge

      Re: RIP GIL

      You only need to remove the GIL for better parallelism (on multiple cores), asyncio does the job for concurrency. Of course, now that multicore environments are becoming ubiquitous, the need to use them effectively is increasing but processor locking has always had advantages.
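      The split in practice - asyncio interleaves tasks on one core, while multiprocessing sidesteps the GIL with separate interpreters (a sketch using the POSIX "fork" start method; the workload is a stand-in):

```python
import multiprocessing as mp

def cpu_bound(n):
    # CPU-bound work: threads in CPython would serialise on the GIL
    # here, but separate processes each get their own interpreter.
    return sum(i * i for i in range(n))

# The "fork" start method (POSIX) lets this run without the
# __main__ guard that "spawn" platforms require.
ctx = mp.get_context("fork")
with ctx.Pool(processes=2) as pool:
    # Runs in parallel across cores, GIL notwithstanding.
    results = pool.map(cpu_bound, [10_000, 20_000])
```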

      Larry Hastings gave an excellent talk last year on his attempts and progress on removing the GIL.

    3. Alan Johnson

      Re: RIP GIL

      Yes - who writes this stuff? Both are necessary and neither replaces the other.

      I do not program in Python but the two concepts of event driven programming and multi-threaded or concurrent programming are separate, do not address the same issues and are frequently complementary.

      The problem with event driven programming is that the event loop must not be blocked for a significant (whatever that means) length of time. It is necessary and normal therefore to have event queues etc and multiple threads to handle operations which are long and do not make sense to break up into smaller sub-operations. If realtime matters then having a single event queue and processing is a major issue because it imposes FIFO scheduling on the system which is almost certainly not appropriate.
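      The pattern described - a loop that only handles short events, while long jobs run on threads that post completion events back - can be sketched without any framework:

```python
import queue
import threading

events = queue.Queue()   # the loop's inbox: everything arrives as an event
jobs = [1, 2, 3]
results = []

def worker(job):
    # Long-running work happens off the loop thread, then posts a
    # completion event back instead of blocking the loop.
    events.put(("done", job * 2))

for job in jobs:
    threading.Thread(target=worker, args=(job,)).start()

# The event loop itself only ever handles short events, FIFO -
# which is exactly the scheduling limitation noted above.
while len(results) < len(jobs):
    kind, payload = events.get()
    if kind == "done":
        results.append(payload)
```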

  4. John Smith 19 Gold badge

    So the 80's are back. Co-operative multi tasking.

    Because the Windows event loop showed it works soooooo well.

    1. Brewster's Angle Grinder Silver badge

      Re: So the 80's are back. Co-operative multi tasking.

      Windows wasn't so much cooperative multitasking as competitive multitasking, with every thread competing to see who could get the biggest slice of the processor. But within an app the coroutines are all under your control so you can bludgeon offenders into submission.

      It doesn't fit all use cases. But it can help in some situations.

      1. John Smith 19 Gold badge

        " with every thread competing to see who could get the biggest slice of the processor."

        Until one of the procedures on the massive case statement swallows all the messages and the whole system goes TITSUP.

        There's a reason Windows eventually went preemptive, other than NT being built by a team with experience of writing an actual production grade OS.

        1. Brewster's Angle Grinder Silver badge

          Re: " with every thread competing to see who could get the biggest slice of the processor."

          Cooperative multitasking was always a silly idea at the OS level. (Calling it "competitive multitasking" was meant to be disparaging.) And it wasn't hard to do; hell I wrote apps that did it internally on DOS and I'd come from eight bit micros that had it (OS9) so it was ludicrous Microsoft didn't do it (although, without hardware memory protection, it would've always been a crap shoot).

          At an application level, however, it is a very different beast. You want an app to work so a blocking coroutine is a bug. But, as I say, it's not right for every situation. I've been playing with it for a good while in javascript and I still mix and match it with background threads.

  5. Adam 52 Silver badge

    Python 3 split over?

    From the PySpark documentation I was reading this morning:

    "PySpark requires Python 2.6 or higher. PySpark applications are executed using a standard CPython interpreter in order to support Python modules that use C extensions. We have not tested PySpark with Python 3 or with alternative Python interpreters"

    1. Ken Hagan Gold badge

      Re: Python 3 split over?

      Python3 is not *that* much of a change. Yes, there are breaking changes, but none should trouble a competent programmer if the code is under active maintenance, so if anyone is presenting code in 2017 and spreading FUD about 3 then you should avoid them. They do not understand their chosen implementation language and that is never going to end well.

      1. Anonymous Coward

        Re: Python 3 split over?

        @Ken Hagan "...Yes there are breaking changes, but none should trouble a competent programmer..."

        *

        I suppose this is true. I'm not a professional programmer, but back in the Python 1.5 days around the year 2000 I started writing a fair number of Python utilities and tools for my own use. All of these programs survived without much (or any) maintenance till around 2014. That was when Red Hat announced that Python3 was the go-to version for the future of Fedora (although they continued support for 2.7, and still continue that support today).

        *

        This change made me convert all my tools and utilities from Python 2.7 to 3.x. The 2to3 utility absolutely didn't find everything. Converting tkinter (windowed) programs was a REAL pain. Bottom line -- I spent a lot of my spare time over about a year doing the conversion. No tweaks or improvements just one-to-one functional conversion.

        *

        My beef is that this conversion provided me with ABSOLUTELY NO VALUE....everything looks and runs just the same as it always did. But Guido has got the print statement converted to print(). Yeh!!

      2. Anonymous Coward

        Re: Python 3 split over?

        That's the problem, Ken. Python3 made just enough breaking changes to annoy programmers, without fixing major design flaws.

        I still use Python (any version) for small things where it's convenient. The stability is nice. But I haven't taken it seriously as a language since the 3.0 release.

        1. Charlie Clark Silver badge

          Re: Python 3 split over?

          Python3 made just enough breaking changes to annoy programmers, without fixing major design flaws.

          While it's arguable that Python 3 did actually fix some (but not all) design flaws, doing so brought some unnecessary incompatibility (unicode) and a considerable performance cost. However, since Python 3.5 performance is generally on a par with Python 2 and asyncio does offer new opportunities.

          Some systems will stick with Python 2 for as long as possible because they just work and the costs associated with migration far outweigh the benefits. But this is true of many systems and why virtualisation is so important.

          But for the last few years lots of projects have added Python 3 support and new ones are written exclusively for it. This means that newer programmers rarely face any problems.

          There are lessons to be learned from 2/3 and we can only hope that future changes in the language are handled with a greater understanding for the maintenance of existing libraries and applications. I think that the shift to time-based releases under Larry Hastings is evidence of this.

          1. foxyshadis

            Re: Python 3 split over?

            Programmers who consider Unicode an "unnecessary incompatibility" are the reason why so much software is fundamentally broken anytime it encounters anything that isn't Latin-1. I don't know about you, because you probably never had to touch foreign words or names at all, but Code Pages were a damned nightmare to anyone who actually wanted to do things right.

            It really isn't that difficult to figure out bytes vs strings. You guys have had 10 years to wrap your heads around it, and all you have to do is do the right thing. It's not like Python 2.7 is going anywhere, literally all you have to do is convert your shell files from calling python to python2 to make them work, but you're too incompetent to even do that!
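            For anyone still wrapping their head around it, the boundary is mechanical: text lives in str, the wire carries bytes, and every crossing is an explicit encode or decode (the latin-1 line shows where code-page bugs used to live):

```python
# Python 3 separates text (str, Unicode code points) from raw bytes;
# crossing the boundary always means an explicit en/decode.

name = "Müller"               # str: text, no encoding attached
wire = name.encode("utf-8")   # bytes: what files and sockets carry

# Decoding with the wrong codec silently produces mojibake,
# which is exactly the old code-page failure mode:
latin1_view = wire.decode("latin-1")   # wrong codec, no error raised
roundtrip = wire.decode("utf-8")       # correct
```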

            This is literally no different from the worthless sysadmins that still complain about Perl 6 and Linux 3, because it violates their comfortable safe space, and they just want to get paid to never have to learn anything ever again.

            1. Anonymous Coward

              Re: Python 3 split over?

              @foxyshadis, I've left you the space below in case you feel the need to rant about anyone else:

              <rant>

              .

              .

              .

              .

              .

              .

              .

              .

              .

              </rant>

            2. Charlie Clark Silver badge

              Re: Python 3 split over?

              I don't know about you, because you probably never had to touch foreign words or names at all

              Seeing as I live in Germany I have to do it a lot…

              I understand the difference between bytes and strings just fine but it wasn't until u"" was restored in Python 3.3 that porting from 2 to 3 felt less like shooting yourself in the foot. Keeping the literal around wouldn't have cost anything and would have kept a lot of goodwill and would undoubtedly have brought the ports of many projects forward.

    2. thames

      Re: Python 3 split over?

      I don't use Apache Spark myself, but there are apparently lots of people using it with Python 3. Python 2 is the default, but you select Python 3 by setting an environment variable.
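      Specifically, PySpark reads the PYSPARK_PYTHON environment variable (and PYSPARK_DRIVER_PYTHON for the driver side); a typical setting, with the interpreter name an assumption about your system:

```shell
# Point PySpark at a Python 3 interpreter before launching:
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=python3
```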

  6. Kevin McMurtrie Silver badge

    Async not always easy

    Async I/O is not necessarily easier than multi-threading. Blocking I/O is trivially simple but it consumes a thread for an unknown duration. The workaround is multiple threads, and that's where it might become hard. Async I/O is tricky to stream because the control is reversed - you read data that is pushed to you and write data that is pulled from you. Coders can still create bugs by allowing events to complete while the program is no longer in a state to accept them. My preference is for both blocking and async mechanisms to be available since they have different advantages and disadvantages. I also like using async tasks for a lot more than I/O.
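    The trade-off shows up even in miniature: two awaited delays overlap on a single thread, where two blocking sleeps would run back to back (the delays here are arbitrary):

```python
import asyncio
import time

async def fetch(delay):
    # Stand-in for async I/O: yields to the event loop rather than blocking.
    await asyncio.sleep(delay)
    return delay

async def main():
    # Both waits overlap, so the total is ~0.3s, not ~0.6s.
    return await asyncio.gather(fetch(0.3), fetch(0.3))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
```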

    An interesting note is that Jython supports threads. I used it for a while and there were few threading problems specific to the Python language itself. All it needs is some coordination classes for semaphores and piping data between threads. With machines easily having 32+ hardware threads, it's stupid to say that you need to launch 32+ copies of your app to use them.

    1. Anonymous Coward

      Re: Async not always easy

      You might find the Actor model or Communicating Sequential Processes interesting. The latter is particularly good in my opinion, solving (well, highlighting) many of the theoretical problems that exist with systems using multiple paths of execution. ZeroMQ is an excellent Actor model formulation, and has many very useful features to recommend it.

      A quick summary of the difference. In actor model programming (i.e. everyone's interpretation of async io), a sender can send a message with no knowledge as to whether the receiver has (or will) receive it. In CSP, the sender blocks until the receiver has read the message.

      Depending on what you're doing (e.g. a real time processing system), the "blocking" is not a problem. If it is, it simply means that the architecture is wrong, the implication being that there needs to be more receivers to share the workload. Actor model's asynchronicity sounds good, but really it just means that you disguise an inadequate architecture with latency...

      Another aspect in a complicated system is the potential for deadlock; circular dependencies and similar problems can easily be written into Actor and CSP systems. The difference between actor and CSP is that an actor system may not deadlock until many years later when some network connection becomes a bit busy, whilst a CSP system will guarantee to deadlock each and every time.
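      The two send semantics can be contrasted in plain Python, with queue.Queue standing in for a channel (this illustrates the semantics only - it is not ZeroMQ's API):

```python
import queue
import threading

channel = queue.Queue()
log = []

def receiver():
    msg = channel.get()              # read the message
    log.append(("received", msg))
    channel.task_done()              # only now is a CSP-style sender released

t = threading.Thread(target=receiver)

# Actor-style send: fire and forget - put() returns immediately, with
# no knowledge of whether anyone will ever read the message.
channel.put("hello")
log.append(("sent", "hello"))

t.start()
# CSP-style rendezvous: join() blocks until the receiver has consumed
# (task_done) everything sent - sender and receiver must meet.
channel.join()
log.append(("acknowledged", "hello"))
```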

      Obviously if the system is doing a web-server like thing, then there's no need for any of this. In such a system, you just want to push messages out in the general direction of the client, and you don't actually care when / if / how it gets there.

      Danger of Missing an Opportunity

      As an old school programmer who grew up with parallel processing on Transputers in the early 1990s (= Communicating Sequential Processes), it's been jolly amusing to see the modern world rediscover stuff that was essentially all done back in the 1970s, 1980s. It's only gone and taken nearly 30 bloody years.

      The problem with this new async stuff in Python is that it's barely beginning to touch the surface of what has already been done elsewhere. This is a problem because no doubt a bunch of people will pick it up, a ton of stuff will get written, it will then not be changeable, and it ends up being a wasted opportunity for Sorting it Out Properly.

      This async stuff sounds like it'll be pretty lame in comparison to ZeroMQ. As a bare minimum they should look at ZeroMQ, pay attention to the different patterns it implements, and replicate them. Anything less than that is simply wasting future programmers' time.

      No Perfect Solution Yet

      I like ZeroMQ because of its clever patterns and sheer just-gets-on-with-the-job no mucking about approach to joining threads / processes / machines together in a way that means you no longer really care about how it's done anymore. I like Rust and (probably) Go because they've actually gone and adopted Communicating Sequential Processes - a very good move.

      However, none of it is quite there yet. ZeroMQ bridges between threads / processes / machines and is admirably OS / language agnostic, but is Actor model, not CSP. Rust does CSP, but AFAIK the reach of a channel in Rust is stuck within the confines of the process; it won't go inter-process and it certainly won't go inter-machine. To me the ideal would be ZeroMQ, but one where the socket high water mark can be set to zero (which would then make it CSP).

      Generally speaking I actually use ZeroMQ, and something like Google Protocol Buffers on top (though I prefer ASN.1). That way you can freely mix different OSes, languages and hardware into a single system, whilst being able to deploy it on anything ranging from a single machine to a large cluster of machines. This level of heterogeneity is fantastic when you're developing systems that you're not quite sure what they'll end up looking like.

    2. bombastic bob Silver badge

      Re: Async not always easy

      In programming lingos like Python, maybe, but I've been doing asynchronous things in C/C++ for decades.

      It's usually a matter of careful design, use of sync objects, etc. and, of course, background threads.

      However, having "all of that" in Python might be useful. Then again, it might be forcing Python to do things it shouldn't be used for...

      I ended up falling into a place where I have to update a Django web site for a customer. Of course, so MANY things were so highly inefficient that I wrote a C utility to do the most time-consuming operations 30 times faster than before, and invoked it as an external utility from the existing Python code. Yes, I measured the performance difference. 30 times faster.

      The beauty of Python, though, is that it HAS those provisions built-in to invoke an external utility and return the stdio output as a string. I think Java painfully LACKS that kind of support, last I checked (I could be wrong, I'm not that familiar with Java). Anyway, it makes a LOT of sense.
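      That provision is the subprocess module - run an external binary and capture its stdout as a string (the command here is just an illustration):

```python
import subprocess

# check=True raises if the tool exits non-zero; text=True decodes
# stdout from bytes to str automatically.
proc = subprocess.run(
    ["echo", "30 times faster"],
    capture_output=True, text=True, check=True,
)
output = proc.stdout.strip()
```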

      And that leads me to another point: perhaps it's a BAD idea to try and force Python to do things it shouldn't be used for in the FIRST place. Right?

      I've written my own customized web servers in C and C++ before, including a really small one that runs on an Arduino. So I think I can be a pretty good judge of "you're doing it wrong". Django is "doing it wrong".

      However, what Python seems to do REALLY well is allow you to quickly throw together a utility or a proof of concept application. Alongside shell scripts, Perl, and the occasional C language external utility, it's a nice addition to a computer that's used to "get things done".

      I'm not sure what threads and async I/O will actually do for anyone, in the long run. Maybe "nice to have" but if you're concerned about I/O performance, WRITE IT IN C OR C++.

      /me intentionally didn't mention C-pound (until now). The fact that I call it 'C-pound' is proof of why.

      1. david 12 Silver badge

        Re: Async not always easy

        >Alongside shell scripts, Perl, and the occasional C language external utility, it's a nice addition to a computer that's used to "get things done".<

        And it would be even better at that if it had resumable exceptions.

        Resumable exceptions make speed optimisation more difficult (not impossible, but more difficult). On the other hand, resumable exceptions enable finely-grained exception handling for i/o-bound exception-prone multi-threaded asynchronous processes that spend most of their time waiting anyway.

        And once you've written your first application with a separate try/catch block for every single line you've learned why resumable exceptions are not universally a bad idea.

      2. John Smith 19 Gold badge

        "it's a BAD idea to..force Python to do things it shouldn't be used for in the FIRST place. Right?"

        'Nokay

        A remarkably balanced and sane PoV. A lesson that should be taught on all CS courses.

        But IRL...

        You get people trying to write an OS in FORTRAN.

        And then the fun begins....*

        *I know it's stupid. You know it's stupid. But the Board spent a shed load on that new (cross) compiler so it's going to get used.

        Let the death march begin.

      3. foxyshadis

        Re: Async not always easy

        Aside from shelling out, Python also has fully-working dll/so support, with the ctypes library or one of its pretty wrappers, saving even more overhead versus spinning up an executable and parsing its stdout. Practically all of the important libraries have cpu-intensive operations in compiled .pyd (which is just a dll/so), and quite a few wrappers exist to call out to standard libs.
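        A minimal ctypes call, POSIX-specific: CDLL(None) exposes the symbols already loaded into the process, libc included, so there is nothing to spin up or parse:

```python
import ctypes

# On POSIX, CDLL(None) hands back the symbols already loaded into
# the process - which includes libc, no shared object to locate.
libc = ctypes.CDLL(None)

# Declare the C signature so ctypes converts arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"hello from C")   # direct call, no subprocess
```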

    3. Yes Me Silver badge

      Re: Async not always easy

      "It's insanely difficult to get large multi-threaded programs correct," Hettinger explained. "For complex systems, async is much easier to get right than threads with locks." That strikes me as absurd. Threads, queues and locks are easy to get right. The model is clear and the pitfalls are well known to every CS student. There are other things in Python that are much more tricky (the semantics of 'global', the absence of a clear difference between call by name and call by value, and of course sloppy types are but three examples).

      Event loops are a cop-out compared to real multi-threading. Tkinter is a good example of how not to do things properly. I haven't looked at asyncio, but anybody who thinks Twisted is better than Python threading is... twisted.

  7. Stevie

    Bah!

    Cobol programmers were doing async decades ago.

    Get off my lawn.

    1. getHandle

      Re: Bah!

      You're a throw-back but have an upvote anyway. Currently overseeing guys doing it in C++... Feeling old!

    2. Ken Hagan Gold badge

      Re: Bah!

      PJ Plauger wrote an essay about 30 years ago in which he described the evolution of a programming model that is so standard that many readers of this forum might be unaware that there was ever any other.

      He noted that when operating systems first started being able to run multiple programs (Yes children, that was a thing once.) the OS designers naively offered pretty much all of the synchronisation primitives to user-space programmers that they had themselves used to implement the operating system. This included stuff like async IO, signals, multiple threads of execution in a single address space, ... whatever.

      Very quickly they learned that their customers, the pleb user-space programmers, couldn't handle this. The solution was to create the abstraction of "your program owns the entire machine and there is only one thread in your program". The plebs could handle that and if the OS was cunning enough it could run several pleb programs and multi-task them against each other to maintain efficient resource usage.

      I can imagine that the abilities of some pleb programmers have gone up a little since then, but probably not enough to make it safe to encourage everyone to do everything async "just because they can".

      1. Mage Silver badge

        Re: Bah!

        Did we abandon real computer science in the late 80s / early 90s in favour of "languages", "libraries" and "frameworks"?

        Have we abandoned compile time validation in favour of run time testing?

        Is discussing the merits of JavaScript vs Python 2 vs Python 3 missing the bigger picture?

        Async, threads, co-routines, mutex, signals, processes are all tools for design of systems with concurrency. Sometimes co-operative multitasking, pre-emptive or dataflow design is best approach. What works for user space on a desktop GUI may not be appropriate for a device driver or a web server with an SQL back end.

