Programming languages in economics: Cool research, bro, but what about, er, economics?

Ignoring the central insight and purpose of economics seems odd in an economics paper. You peeps here at El Reg are most unlikely to be regular readers of papers published by US body NBER, the National Bureau of Economic Research. So I've brought you one we can all puzzle over together (PDF here), "A Comparison of Programming …

  1. ElReg!comments!Pierre

    > If you were trying to make a machine that goes “Ping!” to alert the CEO's secretary that he's got a new email, then any old bodge job will do, even Java possibly.

    Yeah, no. That's exactly the opposite, actually. If you're trying to make a very complicated and specialized piece of software likely to run on a dedicated machine, you'll want to write it in the language you're familiar with, because you're infinitely less likely to screw it up than if you were trying to shoehorn your mental mechanisms into another language. Then if it's really, really, really impossible to run that on any machine that you can get, you'll magic it into a piece of C (there are tools to do that for most high-level languages).

    On the contrary, when you're designing a tiny little thing that is both utterly unimportant AND will run atop gazillions of other tiny unimportant things (such as the ping machine you described), it is of outstanding importance that you make the effort to write it as tightly and as efficiently as humanly possible, because if running the ping machine together with the secretary's minesweeper crashes her machine, you and several other people are in the shit. And even if it wasn't the CEO's secretary, writing things badly "just because it's small anyway" is the stupidest thing ever, because in real life piling small crap upon small crap is what makes you run out of resources for the important things.

    The important things being less than optimal (as long as they can be proved to come up with the right result) is less of a burden, because they're important, so subsequent people in the loop will either throw more resources at them or optimize them (there are plenty of automated tools to do that); no-one's going to do anything like that with the small things, and they'll clutter systems till the end of time. And that's bad, m'kay?

    1. Tom 38

      Is that your understanding from a lifetime of dealing with software provided by hardware manufacturers, that they spend a lot of time and effort making it work efficiently?

      Mine is that they get it right enough to put it in a shiny box and sell it. If it sells well and there are software bugs, they can fix them in rev 2, if they care enough.

      1. ElReg!comments!Pierre

        Re Tom 38

        "Is that your understanding from a lifetime of dealing with software provided by hardware manufacturers, that they spend a lot of time and effort making it work efficiently?"

        Yeah, no, of course not; but in my view that's an illustration of my point: they pile up inefficient bloatware upon inefficient bloatware just because every individual part is "so small it doesn't matter".

        1. Tom 38

          Re: Re Tom 38

          The dichotomy is that you are trying to state what is required to produce a high quality product, but manufacturers mainly produce what most consumers want, which is cheap products.

          In general as consumers, we are far happier to pay less for a lesser, buggier experience than we are to pay more for a premium, high quality experience.

          1. ElReg!comments!Pierre
            Linux

            Re: Re Tom 38

            "In general as consumers, we are far happier to pay less for a lesser, buggier experience than we are to pay more for a premium, high quality experience."

            Or we could just use Debian ;-)

            1. Kepler
              Paris Hilton

              Re: Re Tom 38

              "Or we could just use Debian ;-)

              A Linux newsgroup thread I encountered some 10 or 15 years ago was entitled "Debian Does Dallas."

          2. P. Lee

            Re: Re Tom 38

            >In general as consumers, we are far happier to pay less for a lesser, buggier experience than we are to pay more for a premium, high quality experience.

            To an extent that's true, but there is also the problem that consumers have been burnt before with high-priced "premium" products which aren't.

            It's a question of scale. For high-volume production, individual customers don't matter; they aren't catered for.

            I'd love to see a study on how production efficiency scales. I have a sneaking suspicion that you don't get better economies of scale of production once you go larger than one factory. The rest of the scale benefits come from more marketing, better lawyers, being more important to local government, etc.

    2. ElReg!comments!Pierre

      PS

      Forgot to mention, of course, that it makes the rest of the article even "truer", and that I agree with your general message; just not with that "small vs big" division of where tightness and efficiency matter, so to speak.

    3. ratfox

      Well, that's just, like, your opinion, man.

      If I had to code that ping thing, I'd hack it in a scripting language like Python, for the minimum amount of coding time and risk. If you manage to write a ping program that crashes the machine, you have no business being a programmer anyway.

      Now, trying to write a ping program in C so that it is as efficient as possible sounds to me like a waste of engineering time, makes it more work to maintain, and in the end you are if anything more likely to have a memory leak.
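
      Something like this, say; a back-of-an-envelope sketch, with a made-up server and login, not anyone's production code:

          import imaplib
          import time

          # Hypothetical settings - swap in the real server and account.
          HOST, USER, PASSWORD = "imap.example.com", "secretary", "hunter2"

          seen = 0
          while True:
              imap = imaplib.IMAP4_SSL(HOST)
              imap.login(USER, PASSWORD)
              imap.select("INBOX", readonly=True)
              _, data = imap.search(None, "UNSEEN")   # IDs of unread messages
              unseen = len(data[0].split())
              if unseen > seen:
                  print("Ping!\a")                    # terminal bell stands in for the "Ping!"
              seen = unseen
              imap.logout()
              time.sleep(60)                          # poll once a minute

      Ten-odd lines, an hour of work including tea, and nothing that could take the machine down with it.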

      1. ElReg!comments!Pierre

        "Well, that's just, like, your opinion, man."

        Of course.

        "If you manage to write a ping program that crashes the machine, you have no business being a programmer anyway."

        In a world of unlimited resources that's true; however when you use up the tiny amount of remaining system resources with a ping machine, the system can become very unstable, and not necessarily because of _your_ code. My point was more that small shit like that is supposed to be run together with a lot of other similar small shit, not all of which will be well-coded; and that the "no matter if it's shit because it's small" mentality is not advisable.

      2. Kepler
        Pint

        "Well, that's just, like, your opinion, man."

        http://www.youtube.com/watch?v=pWdd6_ZxX8c

        The Dude abides!

        I don't know about you, but I take comfort in that.

  2. Charles Manning

    Well duh

    Engineering is the art of compromise.

    Different problems need to be tackled in different ways.

    If you're building millions of something then it pays to spend 100k or more optimising the code so it can run on a 20-cent CPU instead of a 50-cent CPU.

    If the code is going to be run once then speed of writing the code and being confident in the output is far more important.

    Hence actuaries doing one-off calculations use APL or Matlab or such while embedded programmers use assembler, C or whatever.

    Call cost/benefit supply/demand analysis economics if you want, but to an engineer it is just normal decision making.
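
    To put rough numbers on that trade-off (a back-of-the-envelope sketch using the figures above):

        # Break-even for spending 100k to move from a 50-cent to a 20-cent CPU.
        optimisation_cost = 100_000            # engineering spend
        saving_per_unit = 0.50 - 0.20          # 30 cents saved per device
        break_even_units = optimisation_cost / saving_per_unit
        print(break_even_units)                # ~333,333 units

    Past a third of a million units the optimisation pays for itself; below that, it's money down the drain.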

    1. tony2heads
      WTF?

      @Charles Manning

      APL!

      The one with the funny symbols?

      I haven't seen anyone use APL in decades; surely a scripting language with vector algebra (maybe Python with NumPy or similar) would be easier.

      Anyone out there: do you use APL?

      1. ElReg!comments!Pierre

        Re: @Charles Manning

        "Anyone out there: Do you use APL"

        For the kind of work that was APL's strength we now use Python (sometimes in association with R) here. In a neighbouring team they use SPSS for historical reasons (and because it's expensive and impractical, so it must be good, right?). In a former life I worked with physicists who used mostly MatLab (tried it, didn't like it: slow, bad resource management, lacks flexibility; superb graphing capabilities though). It's been decades since I last saw APL mentioned, and it was on a handwritten presentation slide.

        1. Kepler
          Childcatcher

          What do physicists use?

          "In a former life I worked with physicists who used mostly MatLab . . . ."

          My brother-in-law is a physicist, and he still uses Fortran when he needs to crunch numbers. (Don't know which version.)

          1. Milo Tsukroff
            Coat

            Re: What do physicists use?

            I majored in Physics and loved Fortran. After working in accounting software for 35 years, I found that the best language for doing accounting is ... COBOL. Slow to code, runs okay, but when it absolutely positively has to work, or you have to be able to find the bug in a reasonable time, then there's nothing better. I've used or learned most everything else, from Assembler to C to Java etc., but COBOL is still the bee's knees when it comes to handling money. The economics of it makes sense: the financial industry just absolutely positively can not screw up its internal transactions. Unlike code like *cough* Windows *cough* which after running a really, really long time (we're up to a week and a half on average now!) starts to corrupt internally.

            'Nuff said. Mine's the raincoat made from a garbage bag.

      2. Anonymous Coward
        Anonymous Coward

        Re: @Charles Manning

        I would use APL if it was nicely integrated with today's software environments. Modern languages suck at vector math. NumPy feels like the cumbersome Python bolt-on it is. I'll take the funny symbols any day.

        I wrote a little APL frontend to NumPy a few years ago, and someone made one for Javascript. But just setting up the keyboard layout is a chore, and there's a lot more to it implementation-wise, and I've got a million other things to do...
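
        A quick taste of the bolt-on feel, with a throwaway example of my own: APL's outer product is the two glyphs ∘.×, while NumPy spells it via a module path:

            import numpy as np

            a, b = np.arange(1, 5), np.arange(1, 4)
            table = np.multiply.outer(a, b)   # APL: a ∘.× b
            # One pair of glyphs in APL; a dotted attribute chain in NumPy.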

      3. Munchausen's proxy
        Pint

        Re: @Charles Manning

        I use APL (actually A+) as my desk calculator app.

        http://www.aplusdev.org, and available as linux packages

      4. Anonymous Coward
        Anonymous Coward

        Re: @Charles Manning

        I have used APL, and occasionally fire up DYALOG for trivial home use on rainy weekends. It seems to be very rare in the wild though, although if you search YouTube you can find some APL conference videos.

        Also, J may be of interest. I prefer it to APL because it doesn't have the "funny symbols"; all verbs / functions are written as ASCII characters. I use it for personal work. The reason I love array languages is they make programming fun.

        e.g. find all primes less than 1000.

        Answer: ((1&p:) # ]) @ i. 1000

        Cool, or what? Hmmm... maybe you need to be of a certain mindset to find the above cool.

        Also in finance, in conjunction with the kdb database, the Q language is often used (although I hear that most finance devs do their code in C++ / Java, since kdb has interfaces for them).
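
        For the NumPy crowd, roughly the same computation as the J above; my own sketch of a boolean sieve, nothing to do with kdb or Q:

            import numpy as np

            # Primes below 1000 via a sieve of Eratosthenes on a boolean array.
            n = 1000
            is_prime = np.ones(n, dtype=bool)
            is_prime[:2] = False                  # 0 and 1 are not prime
            for p in range(2, int(n ** 0.5) + 1):
                if is_prime[p]:
                    is_prime[p * p::p] = False    # knock out multiples of p
            print(np.flatnonzero(is_prime))

        Rather wordier than the J, which, if anything, proves the point about array languages.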

    2. Anonymous Coward
      Anonymous Coward

      I miss APL

      My first programming language back in the early '70s. My high school had two terminals (with the cute IBM type ball - I still have an APL type ball somewhere) to the local college.

      I wrote the Game of Life in three lines, and I may have been able to do it in two but I think I ran into a line length limit.

      Those were the good old days.
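
      For anyone who wants to try the same trick without hunting down an APL type ball: a rough NumPy equivalent, entirely my own sketch, in a handful of lines rather than three:

          import numpy as np

          def life_step(grid):
              # Count live neighbours by summing the eight shifted copies of
              # the board; np.roll wraps around, so the board is a torus.
              neighbours = sum(
                  np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)
              )
              # A cell lives with exactly 3 neighbours, or 2 if already alive.
              return (neighbours == 3) | (grid & (neighbours == 2))

          # A glider on a 10x10 board:
          board = np.zeros((10, 10), dtype=bool)
          board[1, 2] = board[2, 3] = True
          board[3, 1] = board[3, 2] = board[3, 3] = True
          board = life_step(board)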

      1. Kepler
        Gimp

        APL

        Wasn't Bill Gates a bit peeved with Rod Brock at one point for getting Paul Allen interested in APL, which Gates thought was a distraction for Allen and a waste of his time?

        (If it wasn't Rod Brock, then it must have been Tim Paterson, but I'm pretty sure it was Brock. Whichever one of them was friends with Allen before Microsoft licensed, and later bought, QDOS from Seattle Computer Products.)

    3. Anonymous Coward
      Anonymous Coward

      Re: Well duh

      > Hence actuaries doing one-off calculations use APL or Matlab or such while embedded programmers use assembler, C or whatever.

      The only actuary I know uses Excel (because at the end of the day stuff has to be reported to management) which he checks using an HP-12C financial calculator.

      I only mention this to gratuitously mention the HP-12C which everyone here should have at least one of. And if those Economists were to use one instead of whichever language tops the poll in the paper, they would be (a) less wrong in whatever figures they concoct and (b) all-round nicer human beings. Yes the 12C is that good. Buy one now...

  3. Pen-y-gors

    Treasury Economic Model?

    I thought that was actually quite simple -

    1) Run program

    2) box pops up on screen "Please enter the answers required by your political masters"

    3) Type in required answers

    4) Print out results (see 3 above)

    5) Send output to political masters who then use it to prove whatever lies they are currently peddling.

  4. Bruce Hoult

    pings

    Python sounds reasonable. But I'd be likely to do it as a cron job, written largely in bash. It might well call a bit of Python for some reason or other.

    p.s. I am well acquainted with optimising the sh*t out of things in C (or asm if necessary), having, for example, been involved in writing a commercially successful (right up until iPhone and Android came out) compiler to turn Java games into native code on cruddy little feature phones with 1 MHz CPUs and as little as 400 KB of RAM.

    1. This post has been deleted by its author

    2. BlueGreen

      Re: pings @Bruce Hoult

      (actually reads paper, withdraws muppet post. sorry all)

  5. James 51

    I just bought Another World for my PlayBook. I wonder if that means the port has more or fewer resources at its disposal.

    1. This post has been deleted by its author

  6. frank ly

    re. Another World

    Is that a buffalo with a frickin' laser strapped to its groin?

  7. Anonymous Coward
    Anonymous Coward

    > They code the same model, the same algorithms (without changing them at all even for the known differences with which different languages can handle them)

    Is this even possible without skewing the whole test to the point of meaninglessness?

    Or at least making it even more meaningless than it already was?

  8. Anonymous Coward
    Anonymous Coward

    From (personal) experience

    The choice of language often has less to do with rational economic analysis, and more to do with what the customer, technical architect/technical assurance/standards committee and management's personal biases currently are.

    Speaking as a one-time VBA, VB.NET, C#, JavaEE, BPEL and Python developer. Still at least it gives us code monkeys variety and a broad knowledge base!

    1. This post has been deleted by its author

  9. Paul Kinsler

    If this was such a dull & almost-pointless paper ...

    ... why didn't you choose to write about an interesting one instead? I mean, we all do like to sneer at economists a bit, even if only for entertainment, but actually I was hoping for something fractionally more informative.

    1. John Smith 19 Gold badge
      IT Angle

      Re: If this was such a dull & almost-pointless paper ...

      ". why didn't you choose to write about an interesting one instead? I "

      This was the only one he could find with both economics and programming?

  10. This post has been deleted by its author

    1. Tom 38
      Thumb Up

      Re: Hire Tim Almond

      Are you hiring?

  11. Evil Auditor Silver badge
    WTF?

    Our focus on speed should not be taken seriously.

    Nuff said.

  12. scrubber

    crazed models of how macroeconomists think the real world works

    models of how crazed macroeconomists think the real world works

    FTFY

    No word on optimisations, multithreading, support, maintainability, interoperability, ease of modification, ability to strap it to a pretty GUI, or anything else that in the real world should determine which code should be used to do models that could, if push comes to shove, be done with a frigging glassblower having a coughing fit in the '40s.

    http://en.wikipedia.org/wiki/MONIAC_Computer

    1. Kepler
      Go

      Re: crazed models of how macroeconomists think the real world works

      (1) Great link! I had never heard of MONIAC before. And what a wonderful parody of a name!

      (Wikipedia speculates that "MONIAC" may have been a variation on the name "ENIAC" — a cross between "ENIAC" and the word "money". May have been? May??? How can there be any doubt?)

      (2) Talk about "hydraulic Keynesianism"!

      I thought it was just a figure of speech, a metaphor; an apt term to describe how the Keynesian model works, or at least how many believers in Keynesianism conceive and understand the model. I had no idea anyone ever actually built a working, physical model!

      (The term survives today mainly among critics of Keynesianism, as a pejorative, but it was originally used as a purely descriptive term, even by proponents. And it remains an accurate description of how the Keynesian model works.

      Or at least of how the algebraic models taught in Lord Keynes's name work. Different writers construe Keynes in different ways, and there are passages in his General Theory to support each view. Including the hydraulic one.)

  13. John Smith 19 Gold badge
    FAIL

    "Premature optimization is the root of most evil."

    So economists are obsessed with program efficiency?

    So much of their thinking becomes clearer.

  14. J.G.Harston Silver badge

    "They code the same model, the same algorithms (without changing them at all even for the known differences with which different languages can handle them)"

    Yeah, I bet they did the equivalent of comparing string slicing by comparing

    result$="":FOR a=start TO length:result$=result$+FNmiddlecharacter(source$,a):NEXT a

    instead of just using MID$()

    I've seen people painstakingly write a substring comparison function instead of just f***g using INSTR().
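
    The same anti-pattern translated into Python terms (my analogy, not anything from the paper):

        source, start, length = "A Comparison of Programming Languages", 2, 10

        # Painstakingly, character by character:
        result = ""
        for i in range(start, start + length):
            result += source[i]

        # ... instead of just slicing:
        result = source[start:start + length]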

  15. Bruce Hoult

    dead simple programs are equally fast to write in any language

    If you look at their code, it's just some statically allocated multidimensional arrays, some simple nested loops (corresponding to array dimensions), and a small amount of arithmetic and occasional if statement.

    The C code doesn't give you the safety net of array bounds checking, and the for loop syntax is annoying to the uninitiated, but there's really not a lot of difficulty in using C or C++ in this way. Certainly the development time difference between it and Python would be absolutely minimal.
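
    To give a flavour of what I mean, the whole thing boils down to the shape of loop below; an illustrative sketch with made-up sizes and a toy payoff, not the paper's actual code:

        # Fixed-size arrays, nested loops, a little arithmetic, the odd "if".
        N_K, N_Z = 100, 5
        V = [[0.0] * N_Z for _ in range(N_K)]
        V_new = [[0.0] * N_Z for _ in range(N_K)]

        for k in range(N_K):
            for z in range(N_Z):
                best = -1e30
                for k_next in range(N_K):
                    value = (k - 0.5 * k_next) + 0.95 * V[k_next][z]
                    if value > best:            # the occasional if statement
                        best = value
                V_new[k][z] = best

    Nothing in that requires any language feature more exotic than a loop and an array index.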

    Interesting that their test Mac is a 2.3 GHz quad core i7 (model unspecified), while the Windows machine is a quad core i7-3770 at 3.4 GHz, and yet the execution times are pretty much identical, or even faster on the Mac.

    The conclusion from that is the Mac is a laptop with massive SpeedStep taking it up to around 3.3 or even 3.5 GHz (current 15" Retina MBP).

  16. Anonymous Coward
    Anonymous Coward

    This was peer assessed?

    I stopped reading when they were confusing speed with execution time in the section on Julia.

    1. Kepler
      Headmaster

      Re: This was peer assessed?

      Actually, it was not. As the actual NBER version* of the paper says:

      "NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications."

      And mistakes of the sort you noticed pervade the paper. Nearly every sentence in their summary of results is false, due to errors of English usage,** number transcription, or even math.

      .

      * Tim linked to the version on the U. Penn. Web site — now dated July 26 (Tim's article was posted July 16!) — not the June "working paper" on the NBER Web site. (It just says "June" — no date.) Despite its description in the article, the version Tim linked to is not identified with the NBER in any way (although one of its two co-authors is). Nor does it contain the highly pertinent two-sentence disclaimer that the version on the NBER's Web site has.

      The earlier, NBER version of the paper can be found here:

      http://www.nber.org/papers/w20263.pdf

      It contains numerous mistakes, only some of which were corrected in the later version. For instance, compare the statement that "Matlab is only 1.24 . . . times slower" to the values shown in Table 1. Matlab running under Windows, using Mex files, takes 29 percent longer to execute (1.29 times as long), not 24 percent.

      (And in this instance the number calculated in the table is correct.

      Unless of course they made transcription errors like this in recording their data as well as in copying it from the table. All of the errors I saw in the earlier version that were corrected in the later version were transcription errors like this one — discrepancies between the number shown in Table 1 and the value reported in the summary.)

      .

      ** The authors not only confuse speed with execution time, but routinely and repeatedly confuse, e.g., "X times as slow as" (or, more properly — though they never get this right — "X times as long as") with "X times slower than". Even if we concede that, e.g., being half as fast and therefore taking twice as long is/are equivalent to being twice as slow, that's 100% or 1 time slower, not 200% or two times slower. They seem to get that 1.05 and 1.07 are 5 and 7 percent greater than 1 (though even there, they make the mistake of thinking that that means that 1 is 7 percent less, or faster, than 1.07, which of course it is not), not 1.05 (105%) or 1.07 (107%) times greater than 1, but they turn around and say (for example) that 2.64 and 2.70 are 2.64 or 2.70 times greater than 1. (They are 2.64 and 2.70 times as great as 1, respectively, but only 1.64 and 1.70 times greater than 1.)

      Like the icon's label says, Pedantic grammar nazi alert!

      But seriously, this is a case where what they say means something different from what they intend. And we can tell that they meant something different from what they said by looking at the numbers in the table. Only when we get to Python, R and Mathematica does the difference between what they said and what they meant become insignificant, because the execution times are so much longer than those for C++ anyway.
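
      To make the said-versus-meant gap concrete with the Matlab figure from above (my arithmetic, not theirs):

          ratio = 1.29                     # Matlab time divided by C++ time
          pct_slower = (ratio - 1) * 100   # 29% longer, i.e. 0.29 times slower
          print(f"{ratio}x as long = {pct_slower:.0f}% slower, not {ratio}x slower")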

      And they make math errors, too.

  17. Tom 7

    Oh dear...

    just ignoring parallelisation makes the whole thing a bit moot. Most of the languages listed would have access to the well written and well tested libraries that should be doing the donkey work here.

    Even this creaky old laptop has two cores on it and you would like to think people 'doing economics' might like to make their code 'future proof' as well as reality proof.

    1. Grikath
      Devil

      Re: Oh dear...

      reality proof? economics? Bwahahahahahaaaa!..

  18. Mage Silver badge
    FAIL

    Baffling

    Of course most economists seem to be as smart as a barrel of drunk monkeys. Most of the things they thought were good resulted in the 2008 crash.

    (The 1930s crash was probably due to punitive post-WWI sanctions on Germany insisted on by France, which the UK and USA knew was a bad idea but didn't prevent. The resultant collapse of German banks caused the 1930s depression.)

    I can't see this paper being of any value to programmers, managers or economists except as a warning that someone has "lost the plot".

    1. 100113.1537

      Re: Baffling

      "Of course most economists seem to be as smart as barrel of drunk monkeys. Most of the things they thought were good resulted in the 2008 crash."

      Nope. It was economists who predicted that the policies being developed in the late 1990s would create the potential for a crash, and the politicians who said "screw that, we will make cheap loans available to people who can't really afford them because we will get re-elected a couple more times before the crash happens". Remember this had happened before, with the "Savings and Loan" (i.e. banks) crisis in the '80s.

      Interestingly, there is a case to be made that the property boom years from 1998 to the crash of 2008 actually lifted more people out of poverty than fell back into poverty during the crash. If, in 1998, you were given the choice of slow growth for 10 years or fast growth followed by a crash which one would you pick? Which one gives the best overall result? Remember to add in not just the final GDP/head at the end, but how quickly that GDP/head was reached and the cumulative number of years of increased GDP/head.

      Boom/bust cycles only hurt during the bust.

      1. Anonymous Coward 101

        Re: Baffling

        "Boom/bust cycles only hurt during the bust."

        No shit Sherlock. And jumping from an aeroplane without a parachute only hurts if you make contact with the ground.

    2. Zippy's Sausage Factory

      Re: Baffling

      Yes, the root cause of the 1930s Weimar Republic crash was undoubtedly the punitive WWI sanctions, but had they not decided on the clever wheeze of printing lots and lots of cash to buy their way out of trouble - and to be fair, it worked for a few months - everything would have been a lot less painful. (Interesting that we recently did a smaller version - quantitative easing.)

      Also, I'm amazed you didn't mention the 1929 Wall Street crash - massively exacerbated by people buying "on margin" (i.e. borrowing money to buy shares). Interesting that that problem has come up again as well.

  19. kmac499

    No-Bell Prize in the offing

    The really ground breaking thing about this paper is..

    A bunch of economists decide to study something

    Find it's quite hard, so decide to simplify it. (To get the essence of the system...)

    Pick one easy-to-measure parameter. (Which may well be totally pointless.)

    Produce a whacking great paper to send to their mates (sorry academic peers..)

    The really amazing thing: at the end they admit this might not be the only thing that matters...

    Guys, just take that thought back to your day jobs before you do any more damage to the rest of us.

  20. scrubber

    Economists

    I got out of economics when I realised that when an economist tests their model against the real world, it isn't their model they think has the problem.

    1. HmmmYes

      Re: Economists

      A while ago, after listening to loads of BS about a company's fine-tuned econometric model, put together by top economics PhDs, I asked for a demo/look. Out trundled an Excel spreadsheet, full of errors.

      You may laugh but I was talking to someone a while ago who claimed - and I have no way of verifying the claim, one way or another - that the UK's Treasury and BoE were pretty much the same.

      Still, they do wear a suit and sound cleverer than me.

  21. Anonymous Coward
    Anonymous Coward

    So let me get this straight: the writers of the paper admit that it is only looking at speed at runtime, and state that you should take other things into account when choosing a language, and this idiot has a go at them for only comparing the speed and not taking other things into account? So what! They admit that the scope of the paper is limited solely to that, which makes complete sense for a scientific paper, as they have one thing that they can clearly measure. Perhaps they will do another paper comparing speed from blank piece of paper to results, but that wasn't the scope of their paper. Has this guy never read a scientific paper in his life?

    1. Tom 38

      No, their paper is not on execution speed of languages, it is on choosing the best language for a macro-economic model, and yet they chose to measure a metric that is irrelevant for choosing the best language for a model, and admit as much in their conclusion.

      A "scientific" paper presumably has a hypothesis, an experiment that tests that hypothesis, and conclusions that can be derived from the empirical results of the test. To be valid science, the test must be valid in order to draw conclusions, and the purpose of being a scientist is to devise valid tests.

      In order to determine if your test and conclusions are correct, you release the information to your peers, who examine your work to see if they can determine logical inconsistencies from your experiment method or your conclusions, which would invalidate your conclusions.

      By critically examining their paper for these logical inconsistencies you can determine that their experiment did not produce valid test results, since they only measured execution time, which TFA explains is not worth measuring.

      Science. It works bitches.

      1. Anonymous Coward
        Anonymous Coward

        And written in LaTeX

        My favourite documentation system.

      2. alcalde

        >No, their paper is not on execution speed of languages, it is on choosing the best language for a macro-economic model

        This is simply incorrect. The paper is not about "choosing the best language". The title simply calls it "a comparison of languages". The authors state that there is a lot of folk wisdom out there, such as "Fortran is the fastest language". The paper attempts to formalize and benchmark these claims.

        >Our goal is not to overturn the experience of knowledgeable programmers, but to formalize such experience under well-described and explicitly controlled conditions and to report the information to others.

        >We do not comment on the difficulty of implementation of the algorithm in each language, for a couple of reasons. First, such difficulty is subjective and depends on the familiarity of a researcher with a particular programming language or perhaps just with his predisposition toward a programming paradigm.... the reader is invited to gauge that difficulty for himself.

        >The main point of this paper is provide a measure of the "benefit" in a cost-benefit calculation for researchers who are considering learning a new language. The "cost" part will be subjective.

        THAT'S the actual point of the paper. It objectively answers a very real question people seem to be bizarrely downplaying here: which language is fastest for certain types of calculations?

        > and yet they chose to measure a metric that is irrelevant for choosing the best language for a model

        How the heck is performance "irrelevant"? If it is, someone needs to talk to their brethren Delphi programmers. To every discussion of this type, they give the exact opposite answer in their official forum. I've been told again and again that it's completely irrelevant if language X takes 1/10 the lines of code or 1/5 the time to develop because, paraphrasing, the total time to develop involves design and interviews and testing and writing documentation and the time actually spent typing code is trivial. It seems like both sides are minimizing the area they perform most poorly at.

        >By critically examining their paper for these logical inconsistencies you can determine that their experiment did not produce valid test results

        Well, one of us actually examined their CODE to determine valid test results, and it wasn't you. Their work was indeed valid and I'm working on implementing it in some additional languages they omitted from testing.

        > since they only measured execution time, which TFA explains is not worth measuring.

        I don't know what world you live in where execution time isn't even worth thinking about, even when there's a 900X difference. In the real world we live in, this paper performed an invaluable service of providing quantifiable numbers and confirmed fact rather than intuition and conjecture. The paper shows that Fortran isn't the king of numerical performance anymore, and it shows, at least for one representative example, that those who say Python is too slow (apparently not you, since speed is irrelevant to you) are also incorrect - with JIT, Python was one of the fastest programs and outperformed gcc's C++ performance on Windows. Speaking of which, it also demonstrated that choice of compiler has a significant effect on performance and that the same compiler can perform very differently on different OSes. This is all very useful information and they performed a service by validating it.

  22. HmmmYes

    So after demonstrating macro-economists are not good at economic modelling

    They've now gone on and proven they are no good at programming either.

    And so the struggle for something an economist can gainfully do, with a modicum of success, goes on.

    1. Kepler
      Facepalm

      Re: So after demonstrating macro-economists are not good at economic modelling

      If they were truly good economists, they wouldn't be doing macro.* And they certainly wouldn't be doing stuff like this.

      .

      * Most of modern macro runs afoul of the basic principles of economics — i.e., micro. The people who understand those principles best and most fully avoid macro 'cause they know it's hokum.

  23. Charlie Clark Silver badge

    Give them credit

    The title of the paper is misleading, but they do state quite clearly:

    The target audience for our results is younger economists (graduate students, junior faculty) or researchers who have used the computer less often in the past for numerical analysis and who are searching for guideposts in their first incursions into computation.

    The focus on different implementations and compilers is revealing, as is the algorithm chosen. The aim is to suggest the best tools for scientists/statisticians. These people are often not trained programmers and will use whatever tool they know to get a job done. In some situations this will be fine and dandy, but in others the performance will be unacceptable and they will be open to solutions: using a specialised and optimised library; using a similar high-level language more suited to their task; a different implementation of their current one; or faster hardware. Learning how to use a low-level language is usually not on the cards.

  24. ZSn

    Mathematica, IDL

    Speaking as a *really* bad programmer, once you get into languages like Mathematica, IDL, and MATLAB you don't tend to go back (especially if you're as bad as me). Mind you, what it hides is the vagaries of the languages; Mathematica (as you can guess) is good at simplifying/approximating equations that can then be coded in MATLAB or IDL (am I the only IDL programmer here?).

    That said, I know some people who did serious calculations (weather forecasting/CFD under unusual conditions) who said that Mathematica could, at a pinch, do a half-competent job for small simulations. The problem was it wasn't very efficient - it all depends how big a computer you have to play with.

    What's all with this python malarky... < /shakes walking stick at the neighbours kids/ >

    1. Frank_M

      Re: Mathematica, IDL

      Mathematica will compile to C++, so it can run pretty fast as long as your data structure isn't any more complicated than an array. I guess that is considered language-specific.

      What surprised me most in the paper (yes, I went and read it, mostly) was that the gcc compiler beat both ifort and Intel C. I would have to imagine that is due to auto-vectorization.

  25. JLV
    Boffin

    Depends what you are optimizing for...

    Assume you are optimizing for _your_ thesis paper.

    a) Pick a simple problem and implement it in, count them, 8 languages. After the first implementation, the work shouldn't be that hard, except for figuring out basic language constructs. You can probably peruse the language shootout sites for inspiration as well.

    b) Pick a metric that is entirely objective, mathematical and easily comprehended - speed. Maintainability? Who, outside of our profession, really cares about it? In fact, judging by some code I've looked at, a fair proportion of devs don't get it either. Productivity and maintainability are arguable, speed is not.

    In any case, you can be assured that the average economics prof won't understand that runtime speed is not that important.

    c) Bask in the glory of a well-quoted and argued-about paper, of which even The Register seems to have got wind. If some of your peers disagree with your assessment, you can take refuge behind your hard numbers while refuting their objections as subjective.

    Do you really think they got that paper wrong?

    1. alcalde

      Re: Depends what you are optimizing for...

      >Pick a metric that is entirely objective, mathematical and easily comprehended

      A metric has to be objective and quantifiable to be useful.

      > Productivity and maintainability are arguable, speed is not.

      They said this in the paper - they're subjective. The paper talked of a cost/benefit analysis - the benefit is speed, the cost is how hard the language is to learn and use. The cost is subjective, and of course depends on what the user themselves is already familiar with. The benefit, however, can be codified and quantified and that's what this paper did.

      >In any case, you can be assured that the average economics prof won't understand that runtime

      >speed is not that important.

      How so? Other programmers tell me that run time speed is all that counts because, to hear them tell it, actually typing code is only a few percentage points of the time it takes to write a program. They tell me that it's the productivity you talk about that can safely be ignored. I personally think people are trying to ignore uncomfortable results on both ends, but hey....

  26. Herbert Fruchtl

    HPC

    Having spent a career in HPC, I don't understand this disdain for execution speed. If you write a code that will be run a million times with different input parameters, by different users, it makes a difference if you get the results from one of these runs in a day or in a month. The measure chosen in this case may be too simple, and the neglect of parallelisation is unforgivable (but then, the scope for using MPI tends to correlate with the execution speed of a programming language anyway), but if you run a program often enough, the time it takes to write it becomes negligible. I regularly run for weeks on hundreds of cores, and a factor of two in execution speed compared to a competitor is enough to render a program useless.

    Maintainability is much less language dependent than CS types try to make us believe. "Real programmers can write Fortran in any language", as the old essay said (quite unfairly, btw.). You can write maintainable and well-documented code in Assembler, or win the code obfuscation challenge in Mathematica.

  27. John Smith 19 Gold badge
    Unhappy

    Taking the paper slightly seriously for a minute.

    It only matters how fast the model executes if it's a) very big or b) has to run very often.

    The C++ model runs in <1 sec, so the worst cases, at about 700x slower, run in about 12 mins. IOW: so what?

    Now if you're hacking about and trying to figure out what the model should be in the first place (i.e. its actual structure, not what the parameter values should be; that can be handled through hacking a parameters file and triggering a reload) then you want to be able to add and subtract elements to it at will. So on that basis maybe something more interpreted is better.

    I thought "R" is meant to be quite popular among statisticians (surely one of the heaviest crunching groups of mathematics) yet it's desperately slow to execute.

    Hell, if you're talking arrays you might as well talk APL. BTW there are two IDLs: the Interface Definition Language used in large-scale web system design, and the proprietary data-mashing language which I'm only aware of due to the HARRY_READ_ME file at E. Anglia U.

    Seems like something that should have died decades ago.

  28. Kepler
    Paris Hilton

    Our Methodologically Obtuse NBER

    The NBER put out a paper that was a piece of shit? Good Heavens! Say it isn't so!

    (Next thing you'll tell me is that Martin Feldstein is a vastly overrated economist.)

    1. Kepler
      Coat

      Re: Our Methodologically Obtuse NBER

      Turns out it wasn't actually a true "NBER paper" at all. That is, the paper was not actually formally published by the NBER the way its official papers are. It was merely a circulated working paper. Which carried the following explanation and disclaimer at the beginning:

      "NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications."

      http://www.nber.org/papers/w20263.pdf

      The later, revised and partially corrected version of the paper posted at the link Tim gave in the article — the version posted on the University of Pennsylvania's Web site rather than the NBER's — omits those two sentences.

      1. Kepler
        Go

        Re: Our Methodologically Obtuse NBER

        And while the paper suffers from numerous defects,*

        (1) the authors do acknowledge and make, near the end of their paper, the very same point that Tim makes in the article (about economics, and which scarce resource is the one that most needs to be economized on); and

        (2) the results they provide do appear to be a good and useful set of data points for others to use — and expand on — in the future.

        So it wasn't as bad as first appeared.

        .

        * Mistakes in the paper include: repeated errors of English usage; several errors of number-transcription between the results table in the appendix and the summary of results in the body of the paper — many in the original version, and at least one surviving into the revision; and even a few calculational errors within the results table (some possibly due to rounding, but some too big to be due to rounding).

  29. harmjschoonhoven
    Headmaster

    Re: Higher level languages are easier to set up at the price of slower runtimes.

    Higher level languages are easier to set up at the price of longer runtimes.
