The Node Ahead: JavaScript leaps from browser into future

Voxer is a walkie-talkie for the modern age: an iPhone app that lets you instantly talk across the interwebs, listen at the same time, leave voice messages if no one is on the other end, chat with multiple people simultaneously, and toggle between text and voice to your heart's content. It's a real-time internet app in …

COMMENTS

This topic is closed for new posts.

  1. Anonymous Coward
    Pint

    Very interesting...

    But it sounds like they are painting a limitation as an advantage. I've done event driven, and I've done multi-threaded, and I have to say multi-threaded is usually just simpler to develop. It also scales across cores.

    I'd be more interested in seeing client-side Mono than server-side JS, but that isn't to say that there isn't room for both. With Canvas, Web Sockets, and WebGL, I am beginning to question whether it is worth *not* writing your next app in JS.

    But then again, when I look at JS, I think, "oh god, back to the VB6 days". Yay for code that runs in the browser, but boo for stepping back 10 years in tools and libraries.

    1. Eddie Edwards
      Headmaster

      Not really

      Event-driven also scales across cores. And the price you pay for your "easy" multithreaded development is a profusion of thread context switches and thread stack allocations which kill performance once you reach thousands of instantiations (especially on Windows). The first thing you do to optimize your MT app is to write or import a thread pool, to avoid thread stack allocations. Then you reduce the thread pool to about the size of your core count, to avoid cache thrashing. Then you map your code onto that limited number of threads and hey presto! You're event-driven.

      The advantages of event-driven are outlined in some detail in the article. You just can't handle that many connections without writing event-driven code. Sure, you can develop something multithreaded very quickly, but it won't scale until you do the inevitable refactor to event-driven. I make a significant portion of my income doing refactors like that for people so please, go right ahead and write simple multithreaded code :)

      Having said that, straight-line code is *so* much easier to write than event-driven that it's a wonder we don't see languages that transform one to the other. Although any language with closures comes close (e.g. JavaScript).
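
      To make that concrete, here's a minimal sketch of what closures buy you (not from the article; the file names and the use of Node's fs API are just assumptions for illustration). The callbacks capture their surrounding variables, so the event-driven version still reads almost top-to-bottom:

      // hypothetical sketch: event-driven I/O, with state carried in closures
      var fs = require('fs');

      function copyUpperCase(src, dst, done) {
        fs.readFile(src, 'utf8', function (err, text) {      // runs when the read completes
          if (err) return done(err);
          fs.writeFile(dst, text.toUpperCase(), done);       // 'src', 'dst', 'done' captured by the closure
        });
      }

      copyUpperCase('in.txt', 'out.txt', function (err) {
        if (err) console.error(err);
      });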

  2. Stevie
    Badgers

    Bah!

    "Event-driven"?

    When we used to ask our COBOL programs to do several things at once and not wait for each to finish first, we called it "Asynchronous Processing", or "Asynch Mode".

    1. Anonymous Coward
      Anonymous Coward

      Yes, reinventing the wheel yet again

      In Symbian it's called active objects and it's standard practice. Going further back in time, does anybody remember HyperCard and HyperTalk? And I'm sure there are many more examples of "prior art."

  3. Anonymous Coward
    Flame

    Absolute Google-sponsored bollocks

    Script kiddies taking over the world. It can only end in disaster. C++ too rigid? Only if you don't know how to use it. Unless you have done template meta-programming in anger, don't talk to me about its limitations. It's difficult, yes, but if you want the maximum performance, then that's what you need. C++ provides the building blocks to do absolutely anything your heart desires, if you're willing to put in the effort.

    1. David Dawson
      Grenade

      Can I just say

      This is hilarious.

      Difficult == best

      This is a fallacy. Come up with better logic or I simply won't believe you.

      JavaScript is an unusual language, to be sure, being prototype-based rather than anything more common, but it's perfectly usable, and very powerful, when you know how to use it.

      I've found that the most productive programmers are the ones who are _prevented_ from making common mistakes by their environment. Some systems demand that you manage memory explicitly; most don't. Some demand that you manage threads explicitly; again, I would argue that most don't (PHP, Ruby, Java web stacks etc. don't give you ready access to threads).

      C++ is not a silver bullet. It's not even close.

      FWIW, I mostly use Java-related tech in my commercial work. And that isn't one, either.

      There is no silver bullet.

      1. Anonymous Coward
        Boffin

        Straw man

        I didn't say that C++ is a silver bullet. I didn't even say that it's the "best", because that depends on the goals. If the goal is maximum performance per processor clock cycle then C++ is probably the best (well, if you had a humongous brain maybe assembly could beat it, but it's not practical). Unlike assembly, though, C++ can be expanded almost ad infinitum. Think how different it is to program C++ with STL/Boost compared with naked C++, for example. And nothing stops you from creating your own template library (remember ATL, for example? It almost made COM usable). Funny how, what with carbon emissions and electricity costs, minimizing clock cycles might become fashionable again...

        1. Jean-Luc
          Thumb Down

          I think you are missing the point

          C++ is fast, yes. But often program speed is more driven by the algos used than by the language used.

          If you take the example of BitTorrent, this is code that is optimized for downloading files. Yet, it is written in my preferred language, Python, which usually distinguishes itself by coming in dead last in performance tests.

          It's just that what the coder of BitTorrent did was clever - he realized download time was NOT CPU-bound, which is where C++ rules. It was network-I/O bound, so it was better to optimize WHAT you would be doing (chunking across peers) rather than HOW (which language you are doing it in).

          This is what Node seems to be doing, by going after blocking I/O, but at a generic / platform level. Clever. The one worry I would have (though I haven't coded that much in JavaScript) is that server code needs explicit errors and exceptions (which you then manage), not silent failures like browser code.

          Last, keep in mind that languages like Python shell out a lot of their high-cost computations (around data structures like hashmaps and lists) to... heavily optimized C libraries. So a clever coder in Python can sometimes get a fast program by using an elegant design that leverages already-written C code. I doubt JS is any different.

          There'll be plenty of jobs left for yah C++ jocks, fear none.

          1. Anonymous Coward
            Anonymous Coward

            Which is also why ...

            ... you use PHP, Perl, Python et al on the web.

            PHP will not be your bottleneck. Having to build your apps with C++ will most certainly be a bottleneck in your production line and the reason why your company fails.

            The stupid, stupid hyperbole that you can "do absolutely anything your heart desires" in C++ is just as true of most other languages. You just wouldn't. Even pissant script kiddies know that.

            1. bazza Silver badge

              PHP will not be your bottleneck?

              Really? Ask Facebook about that one...

              1. Anonymous Coward
                Anonymous Coward

                I'll see your Facebook and raise you a Wikipedia and a Flickr.

                I see you're from the Idiotard School of Generalising from the Particular. Don't they teach any sci in comp sci?

                Perhaps. Just perhaps. There are other factors at play. Perhaps Facebook was just badly coded crap, quite possibly by a bunch of C++ devs who think they know web. I mean, four stylesheets in the head alone.

        2. Eddie Edwards
          Troll

          Well, actually

          Actually, assembly *is* practical for the infinitesimal part of your average codebase that needs to occupy the ALUs flat out. Carmack demonstrated this in 1994.

          For the rest, it really doesn't matter all that much what you use. The cost of a single cache miss on a modern architecture covers 100s of ALU operations. The fact that you're only occupying 20 of those with your C++ code instead of 40 with your JIT JS code is of marginal importance (you *may* use less energy).

          There are of course many applications where a lot of ALU code must run flat out without accessing any memory at all. But GPUs win at that stuff so C++ isn't even a contender.

          Once you slice away the parts of your code that would be best written another way, very little remains for C++ to do. To be honest, outside the games industry (and I mean the hardcore PS3 stuff, not the Zynga stuff) I'm surprised anyone still uses C++. I guess it's entrenched.

          But I do agree that C++ plus STL/Boost is almost a reasonable language to program in, provided you have a farm of machines running Incredibuild.

          1. Aeternum
            Thumb Down

            C++ still has a place

            I was reading this debate with interest, but I have to step in here.

            Saying C++ has no purpose outside the games industry is just wrong. How about real time technology? How are you going to get interrupts with microsecond accuracy in JavaScript? You're not. Memory intensive applications where you need to control your memory very closely? Interfacing with hardware in high performance scenarios?

            1. Jean-Luc
              Thumb Up

              @C++ still has a place

              Totally with you on this one. I never did manage to do C++ properly and I respect those who can make it dance. I certainly wouldn't want to open up 100MB text files with a text editor written in Python or Javascript.

              People need to use the best tool for the problem at hand. Sometimes that means handing the job off to a _proper_ C++ coder because it needs C++. Sometimes it means that C++'s cost in terms of time-to-code and paying for a "real programmer" is not justified. Sometimes it means that C++ should not even be considered at all.

              A clever architecture might consist of mostly script language code with critical application bottlenecks implemented in C libraries.

              Also, I don't get all the "script kiddies" remarks in these comments (not yours). Choice of language does not decide whether you are competent or not at programming. Good coders usually know a number of languages - I certainly get C++ syntax, though I am aware it would take me time to be proficient at coding it. I found Javascript to be very interesting when I looked at it and if you look at stuff like jQuery, you see it can be used to do rather impressive stuff - hardly script kiddy territory.

      2. bazza Silver badge
        WTF?

        It's not a fallacy

        It's not a fallacy. The only website operator who says that is the one whose site isn't sufficiently popular to force the business owners to properly confront the issues of scaling up. Your comments suggest that you've not yet reached that stage.

        Sometimes difficult is best. Let's look at some case history:

        Facebook reached a business-terminating scale-out problem. They just couldn't make their site any bigger because they'd reached the practical limits of PHP, the platform on which they'd based their site. Solution? Write a compiler to turn PHP into a compiled, not interpreted, language. That can only be described as a massive bodge at best.

        Twitter had a similar problem. They reached the limits of what Ruby could do, and plumped for Scala (a Java-ish thing) instead. That's a little more intelligent than Facebook's solution, but it's still a kludge.

        In both cases their early days were driven by the need to make rapid and effective changes to the workings of the website. This was important because, to grow the business, they had to make quick improvements, otherwise the websites would have perished at the outset. PHP, Ruby, etc. are quite suitable for that. But none of that early development ever factored in the possibility that their websites might grow to world-dominating sizes. BIIIIIIG mistake. Just think how much better off they would have been if they had tackled the scale problem at an earlier stage. As it is, it's effectively too late for both of them to re-implement properly.

        A good example is Google (who really do know about scaling problems). They apparently have a performance metric of searches-per-Watt, because that is the prime cost driver of their business. Just imagine how much money they might save if they could improve that ratio by just a few percent! And indeed, Google's search infrastructure and website code are said to be very, very different to those of any other website. I gather that Amazon's site has also made use of C++.

        As they grow, all website-based businesses will ultimately run into power and size problems. It's not unusual for $0.5M of server kit to cost $1.0M to run per annum. Keep doubling that every year to keep up with growth in the business and it won't be too long before the owners wish they'd spent a couple of million on good C++ engineers a few years back. C/C++ might be hard to implement in, but you do end up with the ultimate in runtime efficiency if it's done properly. Do the workload in the fewest possible CPU clock cycles and you use the least amount of power possible.

        It is difficult though. Identifying the point in the evolution of a business when it is right to tackle the scale-up problem properly is a risky thing. And the rate at which a business (e.g. Facebook) can grow means that the optimum window of opportunity is probably only a couple of months wide. Miss it, and the consequences could prove very limiting indeed.

        So this new thing might be a straightforward lurch towards better efficiency for those not prepared to learn a proper performance-engineering language like C++. Its use of JavaScript condemns it to consuming CPU cycles that more rigid languages like C/C++ don't hide from you. As other commentators have said, this is just some young kids reinventing the wheel and hiding the underlying complexity, and its oh-so-important inefficiencies, from the unknowing and unthinking developer.

        1. David Dawson

          This is still funny.

          It is certainly a logical fallacy.

          No proof has been given that because something is difficult it is necessarily the correct course of action. And no proof can ever be given, because it's an argument without merit or reason.

          Logically prove to me that the best course of action is always the hardest and I'll donate my organs to charity here and now.

          I'm not arguing against the merits of C++, which I think is a worthy language that has its place, rather against the blind assumption that it's the best language for that ill-defined thing, "performance".

          Your examples aren't particularly compelling. Facebook did indeed compile PHP, but most of the PHP execution time is in C modules anyway, so they are optimising what's left. True, this emotionally feels imperfect, but at 500 million users it seems to be holding its own....

          Twitter hit a big wall in the custom message-processing engine they'd written in Ruby. They went for Scala as a replacement. This is a relatively new, functional language that runs on the JVM. It has a very well developed actor framework and other multithreading capabilities that make it very good for writing message/event-based systems (strangely, some concepts similar to those in Node.js). It's been found to be good for message processing because, given its functional nature, messages can be immutable. Code then doesn't need any memory synchronisation to manage shared state between threads, as there is no possibility of corruption.

          The JVM platform is fast, memory hungry and robust.

          It is an appropriate language/ platform for what they wanted.

          Given two theoretical architectures - one Scala/immutable-message based, the other C++ using shared memory with mutexes, semaphores and whatnot:

          The Scala one will completely wipe the floor with the C++ one, no matter how clever you are with STL metaprogramming, because memory synchronisation is expensive in any language.

          The correct architectural/ algorithmic decision here will totally rule which solution wins, not the language per se.

          Some things need hyper efficient code that keeps the power usage down; but then, why not use C? or assembly? Heck, use C and GPU/ CUDA or something else to make your system scream? Why the obsession with C++?

          1. bazza Silver badge

            @David Dawson, This is still funny

            "Logically prove to me that the best course of action is always the hardest and I'll donate my organs to charity here and now."

            Can't be done. You only discover the need for efficiency when the biggest cost becomes power consumption. Power consumption growth tends to have a nasty way of becoming unsustainable and unexpandable as things grow. For example, what if your next round of upgrades is going to mean laying in a £20M power cable to run an extra £1M of servers? So it's something you will have to discover for yourself. Of course, if a business doesn't grow to such sizes then power consumption may never become the biggest issue and the pressure to do things the hard way never builds. Facebook, Google and Twitter all did grow massively, and power is certainly their biggest issue. Google now build their own power stations and sell to electricity consumers when the world is asleep and not doing so many searches!

            "Your examples aren't particularly compelling. Facebook did indeed compile PHP. but most of the PHP execution time is in C modules anyway, so they are optimising whats left. True, this emotionally feels imperfect, but at 500 million users, it seems to be holding its own...."

            So if PHP is mostly executing in C libraries, why did they feel the need to compile it in the first place? Whatever inefficiencies they had resided in their source code, not in the library routines they were using. They've saved a little bit by eliminating the run-time interpreter, but those inefficiencies are still there in the source code. It is holding together, but at what cost to their profit margin? Their server farm costs must be tens of millions a year, and even a small saving would likely easily pay for a re-write in a more runtime-efficient language.

            "Given 2 theoretical architectures, one scala/ immutable message based, and the other C++ using shared memory with mutexes, semaphore and what not."

            Who said anything about shared memory and mutexes? I've been using OS IPC primitives such as pipes for message passing between threads for twenty years. In modern unixes, this sort of pipe:

            #include <unistd.h>

            int pipe(int filedes[2]);

            and Windows

            CreatePipe();

            People need to read library references more. Fast, very scalable (on unixes and windows they're effectively interchangeable with sockets), very easy, quite well suited to modern CPU architectures that use high speed serial links between CPU cores. This is message passing done at the lowest possible level with the least possible impediment to performance.

            The idea in one form or another has been around since 1978 and clever people have been programming that way for many decades now:

            http://en.wikipedia.org/wiki/Communicating_sequential_processes

            Ah, the happy days of Transputers!

            "The correct architectural/ algorithmic decision here will totally rule which solution wins, not the language per se."

            Yes, designing for scalability is often important, and message passing between threads and processes is a good way to scale. Most people run away from that sort of architecture to begin with, but are forced into it sooner or later. But that doesn't dictate language choice. What does dictate language choice is the pressures on the business. Scaling up eventually means power consumption becomes the single most costly thing, so C++ or something like it starts looking attractive (if painful). Not scaling up gives one the luxury of indulging in an easier language. Scala and Node.js might make using the right sort of architecture easier, but they can't be as runtime-efficient as a compiled native application with a minimal runtime environment between app and CPU.

            "Some things need hyper efficient code that keeps the power usage down; but then, why not use C? or assembly? Heck, use C and GPU/ CUDA or something else to make your system scream? Why the obsession with C++?"

            One consequence of message passing in C++ is that really you don't need C++. I tend to use C, and consider threads to be "opaque objects" (I'm not using shared or global memory) with "public interfaces" (I talk to them only through their pipes / sockets). All good object orientated paradigms.

            GPUs and CUDA/OpenCL are moderately interesting, but modern GPUs are too powerful. They're fine if you need only one or two because then that's one PC and you can keep them busy. As soon as you need more than can fit in a single PC you're in trouble, because you can't keep them fed by sending data over a LAN; you need something more exotic.

            In the branch of the large scale embedded systems world I occupy PowerPC (of all things) is still just about the best thing because of the GFLOPS / cubic foot that you can achieve. Intel aren't far behind and may be on a par with PowerPC now. As I said earlier GPUs aren't bad but you can't efficiently bolt together more than two of them. They're OK if your application is such that you can send them a small amount of data and let them chew on it for a lengthy period of time because then you can match the interconnect performance to the compute power of a GPU. In my domain the interconnect performance is as important as the compute power, and PowerPC (8641D) is very good at interconnect.

            Interestingly enough I'm beginning to hear of data centre operators making enquiries about this sort of kit because of the size/power/performance ratios that can be achieved.

        2. Anonymous Coward
          Anonymous Coward

          Come on aboard, I promise you you won't hurt the horse.

          "It's not a fallacy. The only website operator who says that is the one whose site isn't sufficiently popular to force the business owners to properly confront the issues of scaling up. Your comments suggest they you've not yet reached that stage."

          Again. Big fucking websites out there that don't run C or C++. Your comments suggest you haven't a fucking clue, but have merely jumped on the "Language X doesn't scale" bandwagon.

          Perhaps if Facebook didn't require quite so many requests per page, they wouldn't have had to fuck around like they did.

          "Sometimes difficult is best."

          So, let's write all websites directly to the chip. Much faster than C++.

          "Lets look at some case history:"

          Yes. Let's. The kernel team think C++ is too slow for the kernel. They did back in the day and they do now. And they still think C++ compilers make too many stupid decisions. Of course, maybe that will change. Maybe the compilers will get better. But maybe D, or E, or whatever else might happen. C++ isn't teh bestzors.

          Perhaps you just haven't written anything popular enough yet.

    2. David Pickering
      Thumb Up

      thumbs up

      well said!

  4. Grendel
    Thumb Up

    Node really does deliver!

    We are using Node.js to deliver real-time web-based resource tracking (vehicle tracking, asset tracking, staff tracking) and mapping solutions to tens of customers with thousands of resources and millions of resource-movements per day on our SaaS service called 'Xlocate' over at www.xlocate.net

    We use Node to provide all of the real-time communications between a range of radio and GSM based tracking devices, MySQL databases and client machines using a web browser and HTML5+WebSockets.

    The architecture is real-time, being almost entirely event-driven, and the applications are developed in Django using model-view-controller, with JavaScript in the client and in the comms servers (Node.js).

    Our solution is implemented with Dell R210 application servers at the front end and medium-performance Dell R410/R710 servers for the comms and database; we have benchmarked our system at over 6,000 transactions per second (TPS)... (as long as our clients use Chrome, and not IE9 or FF3.5)...

    We like the event-driven nature of the system, ease of coding/prototyping/test harness building, outright performance and especially the ability to move modules of code between the back-end servers (Node.js) and the client (browser) as the solution develops.
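
    For anyone who hasn't seen the shape of this, here's a minimal sketch of the event-driven style using only Node's core 'net' module (nothing like our production code; the port number and the echo-to-all-clients behaviour are just placeholders for illustration):

    var net = require('net');
    var clients = [];                                  // all connection state lives on the one thread - no locks

    net.createServer(function (socket) {               // fires once per tracker/browser connection
      clients.push(socket);
      socket.on('data', function (chunk) {             // fires whenever a position report arrives
        clients.forEach(function (other) { other.write(chunk); });   // fan out to every listener
      });
      socket.on('close', function () {
        clients.splice(clients.indexOf(socket), 1);
      });
    }).listen(8124);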

    Node.js + V8 really rocks and was a great find for us!

    1. Anonymous Coward
      Flame

      Infomercial-level accuracy

      An architecture is NOT real-time unless a specified maximum response time is guaranteed. Difficult to believe that JavaScript can do that... Now you can say that for your application you don't need real-time guarantees, but then don't claim it's real-time; the Advertising Standards Agency might be interested.

      1. biznuge
        Badgers

        hunh...?

        Surely the real-world application of an application (hunh) would determine its own relative measure of real-time. I'm fairly sure that I wouldn't be comfortable with my local nuclear reactor running on asynchronous javascript, but for tracking a parcel, it would seem to be as "real-time" as any customer might require...

  5. David Dawson
    Megaphone

    Facebook wrote their own jabber server

    No, they used/modified (I'm sure heavily) ejabberd.

    Event-driven architectures are certainly becoming much more prevalent in my field of development.

    For something as simple as a website, being able to not block on a db access is a dream for scalability.
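
    As a sketch only (the 'db' object below is a made-up stub, not any particular driver - any callback-style query API has the same shape): while the query is in flight, the event loop carries on serving other requests.

    var http = require('http');

    // 'db' stands in for whatever callback-style driver you use; a trivial stub so the sketch runs
    var db = { query: function (sql, cb) { setImmediate(cb, null, [{ name: 'alice' }]); } };

    http.createServer(function (req, res) {
      db.query('SELECT name FROM users LIMIT 10', function (err, rows) {   // hands off; does not block the loop
        if (err) { res.writeHead(500); return res.end('error'); }
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(rows));                                     // other requests were served in the meantime
      });
    }).listen(3000);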

    1. Anonymous Coward
      Flame

      Nothing new

      Real developers have been doing that for decades. It's all those dreadful libraries designed for script kiddies that prevent non-blocking DB access. It seems that finally the script kiddie world has done a bit of catch-up and "it's like OMG wow, have you seen that..."

      1. bazza Silver badge

        Old hat indeed

        What's scary is that script kiddies of today will one day become old farts like us, and there will be a whole new generation making exactly the same mistakes yet again. The benefit of being an old fart is that at least we can say we invented these things, and everyone else merely got round to reading the bloody text book.

        University educators really need to get a grip on what it is they teach. They churn out hordes of young programmers ill-equipped to deal with the real world of power consumption, scale and performance. Until the lecturers recognise that, it'll keep happening.

  6. Jolyon Smith
    Boffin

    Um, "event driven" requires threads under the hood

    On current software execution platforms there simply is no other way afaik.

    i.e. event driven isn't something "other" to multi-threading, it's dependent upon it! Unless you are prepared to put up with a single threaded event pump, but that's not going to scale at all well.

    There are any number of software patterns available to simplify multi-threaded programming, the Active Object pattern is the most successful that I have encountered, but it's a pattern, not a technology.

    Active Object may have an implementation on Symbian but I also developed a real-time hardware control platform for an automated drug dispense testing rig, using Delphi running on Windows. The core AO framework we developed for that project was general purpose and could easily be used in other projects (I have one coming up right now in fact).

    1. SilentLennie

      Just eventloop

      Hi Jolyon,

      It just uses the single-threaded event loop, plus a thread pool for the few operating-system APIs which don't support proper async use.
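
      A small sketch of what that looks like from the JavaScript side (the file name is just an example): the read is handed to the internal thread pool, but the callback still runs on the one main thread, interleaved with everything else on the event loop.

      var fs = require('fs');

      console.log('kick off read');
      fs.readFile('/etc/hosts', 'utf8', function (err, data) {   // work done off-thread by the pool
        if (err) throw err;
        console.log('read ' + data.length + ' chars');           // callback runs back on the main thread
      });
      console.log('event loop is still free');                   // prints before the callback fires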

      Here is the long story:

      http://developer.yahoo.com/yui/theater/video.php?v=dahl-node

    2. Hayden Clark Silver badge
      Happy

      Windows 3.1

      Apps are event driven, but there really is only one thread in the system. If your app gets stuck, it's Control-Alt-Del time.

      I've written a few single-threaded event-driven systems in my time, as well. It's the only way on a Z80 :-)

  7. c 1
    FAIL

    "c++ to rigid" - WTF?

    What drugs are these people on? There is pretty much nothing that C++ can't do. Sure - you may shoot yourself in the head a few times along the way, but "rigid" it ain't.

    1. asdf
      Flame

      yep plenty of ways to hang yourself

      Easy access to pointer arithmetic, check; multiple inheritance, check; many, many compiler-specific behaviors in undefined situations, check; the ability to bust right past the end of your arrays or buffers, check. Yep, C++ isn't for the hippie long-hair hack VBScript web devs who think they are real programmers.

      1. bobbles31

        Off Topic

        One of my favourite quotes is (and I can't remember who said it)

        "Giving Multi-threading to VB programmers is like giving Razor Blades to babies.....of course C++ programmers are not better, they are just more used to blood."

        My 2 cents

        Horses for courses: pick the language that is right for your circumstances, and develop for today. You could invest millions pre-empting being the next Facebook, but you will never ship before the money runs out. If it turns out that you are going to be as big as Facebook, the markets will have realised it too and money will not be a problem.

        Facebook may well have run into the "Limits of PHP" but they were only able to do that because PHP was there to allow them to develop quickly.

  8. Paul Ireland
    Thumb Up

    Another server-side JavaScript environment

    I like JavaScript, and I welcome another server-side JavaScript environment.

    But Node is not the first server-side JavaScript environment.

    Anyone remember ASP? Active Server Pages, or classic ASP, which started around 1998, used Active Scripting languages, including VBScript and JScript. So it was possible way back then to write your backend functionality in JavaScript. I think it is still possible today with ASP.NET too, using the .NET language JScript.NET.

    1. SilentLennie

      Netscape

      Obviously Netscape was there first. They created a server which had JavaScript at its core.

      Technically, Microsoft never had a JavaScript engine, they had JScript, which is a bug-for-bug-compatible implementation of JavaScript or something like that. ;-).

    2. CD001

      ECMAScript

      JScript and JavaScript are both implementations of ECMAScript so - whilst JScript !== JavaScript, it's near as damnit. I always hated ASP though, nothing to do with ASP per se but the way Microsoft "sold it" - it's got what a dozen actual commands of its own so, in reality, it's just another CGI and yet MS always punted it otherwise ... you'd be just as well off using Perl or PHP through CGI on Apache IMO.

      I've been hacking about in "Curly Brace" languages (C++, Perl, Java, PHP and JavaScript) for (far too many) years ... is it wrong to have gained a sort of grudging respect, even fondness for JavaScript? I actually quite like it as a fast (to deploy) semi-OOP type language; want to add a new functionality to an array? Array.prototype.newFunction = function() { } and so on...
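
      For instance (a tiny illustrative snippet - 'sum' is a made-up method name, not anything built in):

      // add a method to every array via its prototype
      Array.prototype.sum = function () {
        var total = 0;
        for (var i = 0; i < this.length; i++) total += this[i];
        return total;
      };

      [1, 2, 3, 4].sum();   // 10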

      If you're used to say Java or C++, JavaScript feels like a KFC Wicked Zinger meal - it's so very, very wrong and yet somehow actually rather nice ;)

  9. Bob H
    Boffin

    food for thought.

    The trick I think a few people have missed here is that you have an interpreted language, but with the optimisations of a commercial engine (V8) that is run on millions of instances. Look at what is being done to improve JS performance in the name of the browser wars! This isn't simply about making commercially sensible decisions for the design of a language; this is about having a platform that can interface with millions, be written by thousands and be deployed anywhere.

    As was said, ASM is faster and C++ is more powerful, but are they meeting the commercial needs of those companies and their deployment models? There are no new stories, no new songs, yet we still move along.

  10. Wile E. Veteran
    Thumb Down

    Nothing new here, move along

    People have been doing event-driven programming as long as computers have been around. Just what do you think implementing a Finite State Machine is?

    Telcos have been big users of FSMs and event-driven programming ever since they switched from discrete-transistor equivalents of electro-mechanical relays to computers.

    I even wrote an event-driven (FSM) system to test auto parts in 8080 Assembler circa 1980. You could use C, C++, or even Ada if you wanted to.

    Nothing new here except forcing the use of JavaScript.

    Big whoop.

    1. DarkStranger
      Thumb Up

      Still it's cool...

      @Wile E. Veteran

      Being an embedded-systems guy who loved to build hard real-time, deterministic systems before I joined the web-app development world, I have to agree with you that the event-driven / interrupt-driven model has been in use for a long time. Nearly every RTOS-based or bare-metal FSM embedded system uses an interrupt-driven (callback) architecture to respond to asynchronous events. Internet routers and wireless base stations are a great example.

      But it's cool to see the young guys rediscover old methods and apply them to newer technologies.

    2. Anonymous Coward
      Anonymous Coward

      Waxing the Vaxen

      Nothing new here, blah, blah, blah. Yes, reinventing the wheel yet again, blah, blah, blah.

      We've been doing it for tens of thousands of years. It's what humans do. Every generation builds off the work of the previous, refining it and retooling it. And FSMs go back to the 19th Century, if not earlier, so big whoops to you.

      "We are like dwarfs on the shoulders of giants [...]"

      Asshole programmers could learn a little humility. Nothing you've ever done was new either.

      1. Tom 7

        Dwarfs standing next to dwarfs

        John - reinventing the wheel is just that: it's not learning from those who came before.

        It's only recently that we've got into reinventing the wheel this way - nowadays three-sided wheels are seen as an improvement on four-sided ones - one less bump!

        1. Anonymous Coward
          Anonymous Coward

          You appear to have misunderstood the phrase.

          Reinventing the wheel indicates an unnecessary re-engineering, not a failure to learn from prior engineering. But moving on.

          So ... features shouldn't be built into a language because they already exist in other languages. Because a feature exists in a lower-level language it should not be implemented in a higher level language.

          Because C++ does something or other, Javascript shouldn't do something or other.

          Would that be the thrust of your argument?

    3. Tom 7

      Big whoop?

      I have to agree, Wile E. Been doing this in JavaScript since frames appeared - this stuff only appears to be novel to the new generation of programmers, who learn by their mistakes.

  11. Mark Pawelek

    Single threaded event queue servers + html5 WebSockets rule!

    We don't need no stinking threads to do multi-tasking. We have Node.js running in only one thread.

    All these guys slagging off JavaScript need to wake up and start living in today's world. JavaScript is Turing complete and can be compiled rather than merely interpreted. The limitation to one thread is an advantage because it forces multitasking via an async event queue + callbacks, thereby saving masses of RAM and letting us scale to millions of 'processes' with ease.

    JavaScript can do much of what Ruby and Python do. There are ways to overcome the natural lack of namespaces and the unfortunate default of global variables. Use objects to enforce namespacing and make variables local to functions - which are, of course, themselves objects. I'd recommend Douglas Crockford's 'JavaScript: The Good Parts'.
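
    Something along these lines - a small sketch of the object-namespace / closure pattern Crockford describes (the 'MYAPP' name and the counter are just examples):

    var MYAPP = MYAPP || {};                 // one global object serves as the namespace

    MYAPP.counter = (function () {
      var count = 0;                         // private to the closure, not a global
      return {
        increment: function () { return ++count; },
        value: function () { return count; }
      };
    }());

    MYAPP.counter.increment();
    MYAPP.counter.value();                   // 1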

    Html5 WebSockets rule especially when I can simulate VoIP with them! Html5 WebSockets will revolutionise the web more so than Ajax. We will, at last, gain the illusion of client/server apps running in real time - on a single-threaded server!

    We have the technology now. We just need to write the apps.

    PS

    'Event driven' is not dependent on multi-threading. We are not simplifying multi-threading here. We are abolishing it.

    1. Tim Parker

      @Mark Pawelek

      "We don't need no stinking threads to do multi-tasking. We have Node.js running in only one thread."

      ..you need multiple execution streams of some sort for useful multi-tasking, even if it's just the data + instruction block for the CPU - the fact that the 'scheduler' (the Node application in this case) is running on a single thread doesn't alter that, regardless of whether the execution streams are asynchronous or not.

      "All these guys slagging off JavaScript need to wake up and start living in today's world. JavaScript is Turing complete.."

      Great - so you have a conditional, some sort of jump instruction and can write to a memory address - please excuse me whilst I put the bunting up.

      "Html5 WebSockets rule "

      I think that sets the intellectual and technical level quite nicely for the next bit...

      "especially when I can simulate VoIP with them!"

      (..rummages for more bunting and perhaps a couple of flags..)

      "Html5 WebSockets will revolutionise the web more so than Ajax."

      My cat could probably 'revolutionise the web more so than Ajax'.

      And I don't even have one.

      "We have the technology now."

      The technology is as old as the hills - this has been gone over by many folk already - the packaging into a single stack is convenient though.

      "We just need to write the apps."

      That would probably be a good idea.

      "PS Event driven' is not dependent on multi-threading."

      I don't think anybody really thought it was.

      "We are not simplifying multi-threading here. We are abolishing it."

      You may be choosing not to use it for this framework (and quite a nice framework it is too, IMO), but if you somehow think this is new and edgy then, as has already been mentioned several times, think again. Is it useful? Undoubtedly. Is it revolutionary? Yeah... right.

      1. bazza Silver badge
        Thumb Up

        @ Mark Pawelek

        You're a fine example of why websites that grow start costing too much to run.

    2. Anonymous Coward
      Megaphone

      HTML5

      'We have the technology now. We just need to write the apps.'

      ...AND NEED EVERYONE TO UPGRADE THEIR BROWSER AND NEED THE WEB SOCKETS SECURITY FLAWS FIXED

      by which time web-sites will be displayed in 3D.

    3. CD001

      I may be wrong...

      AJaX revolutionised exactly nothing - it was always possible to generate JavaScript from server code, just print() it out (messy but doable). All it did was copy the way that ActionScript/Flash could load in XML files and interpret them (the Asynchronous bit)... until they realised that it would be better to just have the server code generate JavaScript anyway (like we were doing before) and codified it into JSON so we're now really using AJaJ.

      Seriously, there is nothing special about the web, it's basically old-skool mainframe problems being rediscovered by the Microsoft generation. We've "had the technology" for probably 40 years.

  12. Not That Andrew
    WTF?

    Hmm...

    IMHO, the issue here is the developer, not the languages. He would seem more interested in chasing the latest fads and scratching his own itches than developing a usable application. That a usable application has been delivered before he hopped on the next trend and started again from scratch is a happy accident. However, I'll be very surprised if the next version of this application is not written in yet another buzzword-compliant, trend of the week technology.

  13. Anonymous Coward
    Anonymous Coward

    Hmmm....

    I see fear in the comments: the fear of those who love complexity for the sake of complexity, astonished that someone has seen the need to simplify the web-application development process, and that the person who did it was not part of their clique. They attack. Ridicule and mockery are their weapons. The web-application development (and I hate to use this word) "paradigm" will change, and they fear being left behind with skills that may rapidly become obsolete.

    Well folks, this is the world of software development; change is what happens here. From top-down structured design to object-oriented. I've forgotten more than I now know, and with good reason: the game is always changing and we have to try to keep up, either that or become unmarketable.

    One thing that rarely changes in this business though; reducing application complexity and increasing application development efficiency and maintainability is good. Doing these things with open tools makes it even better.

    1. CD001

      yes and no

      ----

      One thing that rarely changes in this business though; reducing application complexity and increasing application development efficiency and maintainability is good.

      ----

      All good - but the problem is that sometimes those things come at the expense of scalability. Adding a framework (or even using a GC'd interpreted language) can make the application quicker to develop, yes, but it could come at the cost of the actual operating efficiency of the program. In 99.9% of cases that's not an issue - but if you start to get to Google or Facebook size it can bite you on the arse.

      Personally I think it's generally good to start off with something that can be rapidly developed - if it takes too long to develop it's obsolete before it's complete. You just need to be aware of the potential arse-biting down the road ;)

      It's just a matter of WHERE you most need the efficiency ultimately.

  14. This post has been deleted by its author
