Re: No Browsers?
Even better, ask the EU to add ones for all the other stuff MS wishes to bundle that competes with third-party products: Office, Media Player, PDF reader... Windows...
>I'm pretty sure that locating leaks could be quite accurate
I suspect locating leaks is relatively simple, once you've identified the small segment (sub-100 metres) of the thousands of miles of pipe in each operator's distribution network.
The trouble with all these add-ons is that they increase the number of connections and so, given the number required, massively increase the likelihood of introducing leaks into the new blue poly pipes...
Also, as learnt with car engines and aeroplanes, the sensors themselves fail in "interesting" ways and so introduce a new maintenance requirement.
> but that won't even get you a down payment on a mainframe.
Back in 2002 I bought two top-of-the-range Z-series for a client for £1 each. The laugh was that my previous experience was in the non-IBM systems world, so everyone thought I would choose a Sun Starfire...
Okay, the annual support etc. costs were another matter, but I've never been able to purchase even a single x64 server for £1, let alone 1,000.
My UK bank failed to detect as fraud the opening of online shopping accounts with two different supermarkets on the same day, and the purchase of circa £300 of groceries from each for click-and-collect in towns miles apart, miles from where I lived, and miles from a transaction I had made shortly before the shopping fraud (probably the one where my card was scanned).
The ironic laugh was that, talking to the fraud department, they knew about the click-and-collect scam, yet their system didn't flag such transactions; they also assumed my wife shopped at those supermarkets, when the evidence from the account was that my partner regularly shopped in person at another supermarket.
I detected the fraud when the bank sent an unexpected SMS message to tell me my account was overdrawn.
>if language has poor impedance-match with interfaces designed for C then you should either not write programs which actually use these interfaces in language
This was, and probably still is, a problem when combining code from different languages, e.g. COBOL, Fortran and C. I suspect combining Rust or Swift with C, given their common C-like form, doesn't present the same challenges.
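As a concrete illustration of the impedance mismatch with the older languages (a hedged sketch: the trailing underscore and pass-by-reference convention are common to gfortran-style Unix compilers, but by no means universal):

    /* C function written to be callable from Fortran.
     * Fortran passes every argument by reference, and many Unix
     * Fortran compilers append an underscore to external names,
     * so the C side must take pointers and match that name. */
    void add_lengths_(const int *a, const int *b, int *result)
    {
        *result = *a + *b;
    }

Strings are worse still: Fortran character arguments carry hidden length parameters and aren't NUL-terminated - exactly the sort of mismatch Rust's extern "C" largely sidesteps.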
>The point is simple: C is FAST in execution, but it may not be the best place to START implementing a design!
That applies to all programming, which is why professional software development organisations will have adopted a Structured Design Methodology and toolset.
>I believe this was because they had decided to support too many variants of Unix (DEC Ultrix, SunOS, HP UX, AIX, DG UX, and Windows and VMS too).
In addition to bugs, this caused another problem: position in the porting queue.
I remember bidding with one vendor's Unix box and then switching to another due to slippage in the DBMS porting. This also put the platform vendors in a difficult position, particularly on big bids where Ingres, Oracle etc. were necessary to win: they had to vie with each other to persuade the DBMS suppliers to give priority to their port, so that they could pass the demonstration stage.
I suspect once you get outside the Wintel/Linux PC platform, these considerations still matter.
I remember the size of output being a big talking point in the articles comparing C compilers. If memory serves me correctly, one compiler gave a surprising result: an executable of almost zero bytes. On investigation, the optimiser had determined that the source code performed no function, as it took no input and returned no output, and had thus optimised out the entire module...
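You can still reproduce the effect today (a minimal sketch; the flags are GCC/Clang-style, and the size of the residual executable - startup scaffolding only - varies by platform):

    /* dead.c - does "work", but none of it is observable:
     * no input, no output, no side effects the compiler must keep. */
    int main(void)
    {
        int sum = 0;
        for (int i = 0; i < 1000000; i++)
            sum += i;     /* result never used */
        return 0;         /* so the optimiser deletes the loop entirely */
    }

Build with gcc -O2 dead.c and inspect the output: the loop is gone; only the platform's startup and exit code remains.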
Yes, there were a variety of compilers for the PC; I seem to remember Aztec C being one of the better ones.
Not only were there language differences to trip up the unwary (I suspect some arose from the compilers' differing approaches to handling the x86 segmentation model), but there were also important differences in the libraries, such as what happens when you move a file pointer beyond the end of the physical file - a condition not defined in K&R.
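For example (a hedged sketch: POSIX later pinned the semantics down - the gap reads back as zero bytes - but K&R-era and DOS-era libraries each did their own thing):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("test.dat", "w+b");
        if (!f) return 1;
        fputs("abc", f);            /* file is now 3 bytes long */
        fseek(f, 100, SEEK_SET);    /* seek well past the end */
        fputc('x', f);              /* does this extend the file?
                                       and with what in between? */
        fclose(f);
        return 0;
    }

On a modern Unix you get a 101-byte file with a zero-filled hole; on some 1980s C libraries the answer was whatever the vendor felt like.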
However, the author is just showing their ignorance. Yes, porting C intended for Unix to another platform such as the MS-DOS PC wasn't simple (in fact, just getting the source off the Unix box and on to the PC wasn't trivial). Porting a C program from, say, SunOS (BSD) to NCR (System V) in the 1980s wasn't a simple recompilation, and things weren't much better in the 90s: for example, the Bull DPX/2 200 and 300 both ran System V on the 68030, but as the 300 was SMP, everything had to be recompiled and retested (for one project we used a DPX/2 200 as a comms processor, as it had a certified X.25 card, unlike the 300 at that time).
Similar issues arose with other languages such as COBOL, Fortran, Algol-60 and Algol-68, as all had vendor-introduced differences and extensions.
>You think it would be a good idea to knock down parts of cities to build solar farms?
There really isn't a valid reason why the majority of UK homes don't have a 3.5kW array of panels. And if we had a government that actually invested in the UK, all of those panels could be made in the UK.
In my part of the country there are acres of new-build warehousing/distribution going up; none have solar panels on their roofs (nor do they have skylights, hence they are perfect for covering in panels)...
I see that near Norwich some bright investor has decided it would be a good idea to cover acres of farmland/countryside (equivalent to 65 football pitches) with solar panels to provide electricity to a new business park consisting of your standard-build barns...
Additionally, vertically mounted wind turbines can be usefully deployed in the urban environment.
However, the above requires a mindset that favours distributed and local rather than big and centralised; which doesn't sit well with the typical UK government.
>Its not like someone hasnt invented the grid or anything.
Remember the grid is layered: only the large offshore wind farms are actually connected to the pylon network; everything else is local - but not necessarily local to the wind farm. So near me are two wind farms; however, neither links to the grid at a point that is directly beneficial to the housing and businesses in its immediate surrounding area...
>an experimental power plant, one I actually visited, that ran on cow manure
There is a reason why biofuel suppliers are locating their plants in farming areas.
Tomorrow Water's idea makes logical sense, unlike your typical UK green energy project, which locates fields of solar panels, wind turbines etc. miles from the purported energy consumers.
Anyone here taken a deep dive into the ARM architecture to determine whether its instruction set is biased towards C, or towards some other language and non-Unix-like OSes?
I ask as, from the discussion so far, this platform hasn't been mentioned - perhaps because it is designed for Unix/Linux and C.
Agree you are technically correct: the 8088 was the 8-bit-bus, 8080-chipset-compatible version of the 8086.
Funny to think that back then the upgrade from 8-bit to 16-bit architecture was as big a jump in circuit board complexity and cost as moving from 16 to 32-bit and then 32 to 64-bit.
However, the point is IBM chose Intel...
>The idea was that the resulting machines would run Prolog natively.
Back in the 80's people weren't so fixated on one chip architecture, so you had chip designers building chips to support high-level languages, including the "AI" languages, which naturally had a different architecture from chips designed to run conventional languages like C.
Obviously, if you look at the development of the Intel x86 family, you see that each generation provided better support for structured, stack-oriented procedural languages (i186), OSes (i286) etc.
Linking to the related article on C and Rust/Swift, it would seem that Rust, Swift et al need to get closer to the chip and ensure the silicon and microcode directly support key language features.
>The 8088 / 8086 was really a superset of 8080, hence the awful 64K segments and no 16 bit flat addressing like all the true 16 bit cpus in early 1980s.
Blame the IBM PC, which used the Intel 8088 in preference to Motorola or National Semiconductor (whose architecture and instruction set was perhaps the best of the three), and the rest is history.
I don't remember K&R saying very much about design, in part because they intended it only for professionals, who by definition should know what they were doing and hence know that you need to design before you code...
It took others, such as Alan Feuer with The C Puzzle Book, to spell out just how easy it was to write powerful, opaque code. (*)
My favourite interview question back in the 80's, for prospective employees professing to be experts in C, was to ask them to walk me through:
i+++j;
There are similar pointer expressions that can catch the unwary.
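For anyone who never sat that interview, a minimal sketch of the expected answer:

    #include <stdio.h>

    int main(void)
    {
        int i = 1, j = 2;
        /* Maximal munch: the lexer greedily grabs the longest
         * token, so i+++j tokenises as i ++ + j, i.e.
         * (i++) + j - not i + (++j). */
        int k = i+++j;
        printf("k=%d i=%d j=%d\n", k, i, j);   /* k=3 i=2 j=2 */
        return 0;
    }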
(*) Back in the 80's I had a collection of these books and papers; we coded the puzzles up and included them in our tool's test suites, to confirm correct implementation of K&R.
>That is, it began as a GCC extension but is now a standard and, even if it weren't, LLVM has put a lot of work into matching GCC on Linux and MSVC on Windows for interoperability's sake.
All this is in Rust's and Swift's future - take note. It's relatively easy to create a (new) standard; it is harder to maintain it over the decades as the capabilities of system platforms improve and people use the language in ways not envisaged by the original language creators...
>Therefore, they have to support 176 different targets (x86_64-uwp-windows-gnu, x86_64-uwp-windows-msvc, ...) just to "speak C to C"
It will be the same for Rust et al, unless the language is only intended for a single CPU architecture, say x86-64, in which case it won't replace C on other platforms such as x86-32 and ARM.
Looking back, it was probably a mistake to define the POSIX APIs using C syntax, as per the Unix reference manual...
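The point being that the standard itself speaks C, so every other language must re-express C's types and conventions. A small sketch of what any non-C binding has to reproduce:

    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        /* POSIX specifies this interface as a C prototype:
         *   ssize_t read(int fildes, void *buf, size_t nbyte);
         * so a binding in any other language must reproduce C's
         * platform-dependent types (size_t, ssize_t), the errno
         * convention and the -1 error return. */
        ssize_t n = read(0, buf, sizeof buf);
        if (n < 0) perror("read");
        else printf("read %zd bytes\n", n);
        return 0;
    }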
Goto line number was a feature of Basic.
However, on the i86 architecture you wouldn't actually know the final location until the code had been linked and located, and hence whether pointers etc. had been resolved to 8, 16 or 32 bits and the opcodes selected to support 8-, 16- or 32-bit operations accordingly...
Fortunately, the Intel ASM did implement labels, so you could Goto <label> and the linker and locator would do the maths and fill in the missing bits.
I think you misunderstood: the code could be expressed properly in the assembler, but not in PL/M, which was used for much of the system. However, even in assembler, the warning was not to take the function the opcode performs at face value, but to read the manual and understand what it is doing with this flag. Obviously, the comments against the assembler reinforced what behaviour was being triggered by this seemingly mismatched sequence of opcodes.
Interestingly, because this code was a key part of safety-critical infrastructure (people would die if it failed), I used it to explore C.A.R. Hoare's ideas on formally proving the correctness of programs, and thus gained first-hand experience of the assumptions and limitations of the method. So, in addition to the code, there were several dozen pages of formal working that explained in detail the multiple intertwined logic threads in the code.
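For those who haven't met Hoare's method, its flavour in miniature (a toy sketch, nothing like the safety-critical original): each statement is bracketed by a precondition and postcondition, {P} S {Q}, discharged by proof rather than by testing; the asserts below are merely runtime stand-ins for those proofs.

    #include <assert.h>

    /* Hoare triple, informally:
     *   {true}  r = isqrt(n)  {r*r <= n && (r+1)*(r+1) > n}
     * Loop invariant: r*r <= n.  (Overflow ignored - a sketch.) */
    unsigned isqrt(unsigned n)
    {
        unsigned r = 0;
        while ((r + 1) * (r + 1) <= n)   /* invariant preserved */
            r++;
        assert(r * r <= n && (r + 1) * (r + 1) > n);  /* postcondition */
        return r;
    }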
"A programming language is low level when its programs require attention to the irrelevant."
Not sure what there is about assembler that is 'irrelevant', so it would seem assembler, by this definition, isn't a low-level language.
The paper doesn't make a case for C to be classed as a high-level language; rather, as someone has coined, it is a Goldilocks language: just sufficiently removed from any specific machine's assembler to be portable, but not so far removed that it can't be used to achieve some assembler-style optimisations.
What the paper does do is illustrate that modern high-level programming languages don't easily map to the pipeline and cache models implemented in modern CPUs. Interestingly, I don't remember seeing an assembler that nicely maps to these CPU attributes either; basically, a programmer has to know about these attributes, and design their application and write their code accordingly, to extract the best performance from them.
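A classic illustration of "the programmer has to know": the language says nothing about the difference between these two loops, but the cache does (a sketch; the actual speed ratio depends on the machine):

    #define N 1024
    static double a[N][N];

    /* Row-major traversal: consecutive accesses are adjacent in
     * memory, so every cache line fetched is fully used. */
    double sum_rows(void)
    {
        double s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Column-major traversal: identical semantics, but each access
     * strides N*8 bytes, missing the cache on nearly every load. */
    double sum_cols(void)
    {
        double s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }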
What is clear is that taking full advantage of these CPU attributes requires greater complexity in the code generator.
>What's wrong with Goto ?
Whilst there are many good texts on this, fundamentally it is about structured programming and good practice. If you are using a high-level language, you should be considering all possible parameters and paths through your code, and using the Gotos implicit in the While, For/Do, If, Case and '}' language constructs. By doing this you are using the compiler to prevent the resulting code exhibiting the behaviours associated with missing control-flow instructions.
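To make "the Gotos implicit in the constructs" concrete (a sketch; the second function is roughly what the compiler quietly generates from the first):

    /* Structured form: the compiler owns all the jumps. */
    int sum_positive(const int *v, int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++) {
            if (v[i] > 0)
                s += v[i];
        }
        return s;
    }

    /* The same logic with its gotos written out by hand - every
     * branch the for, if and '}' were providing for free, now
     * yours to maintain (miss one and you get the classic bugs). */
    int sum_positive_goto(const int *v, int n)
    {
        int s = 0, i = 0;
    loop:
        if (i >= n) goto done;
        if (v[i] <= 0) goto next;
        s += v[i];
    next:
        i++;
        goto loop;
    done:
        return s;
    }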
>It is saying that C is now an IDL
And that is because of the success of Unix and its APIs.
To displace C, you need to develop, and bundle with Unix/Linux, native APIs for other languages, specifically application-orientated languages such as COBOL, FORTRAN etc. Naturally, in developing native APIs for other languages, there will be ramifications back into the Unix/Linux kernel and into the assumption that applications would be written in C; this may fundamentally change Unix/Linux and break backward compatibility.
> DEC kit is hands-down the best platform for teaching computers and networking ever invented.
wrt networking, I suggest it needs to be DECnet Phase IV or earlier; whilst I approved of the Phase V shift to OSI, Phase IV was probably simpler from a teaching perspective.
> My texts were mostly Tannenbaum.
Funnily enough, I think Tanenbaum's first edition of Computer Networks gives a better introduction to networking than the later editions, which became focused first on OSI and then on TCP/IP, to the exclusion of practically everything else.
Cultural assumption that everything has to be A-Z left-to-right and top-to-bottom.
I suspect, given the number of 'interesting' authors with surnames in the latter part of the alphabet, that organising your books Z-A in a left-to-right and top-to-bottom structure means people scanning your bookshelf would see all these authors before they see the obvious ones: Asimov, Banks, Clarke etc.
>Consider all the memory pitfalls that can catch you out in C
Remember C limits the extent to which you can 'abuse' the instruction set; for real tightrope code you need assembler.
I remember having to squeeze some functionality into 19 bytes - that was all that was spare in the EPROM. I had got it down to 23 bytes and then discovered a flag that was barely documented in the Intel ASM documentation and wasn't used by PL/M; using this, my code dropped to 17 bytes... Whilst I left clear comments in the source about the feature I was using, I didn't envy the person who would have the task of maintaining that piece of code.
Goto was considered harmful well before the rise of Pascal - a language primarily for teaching computing fundamentals that avoided the mess of Basic and the complexity of Algol. In my second year the coursework for one module was to write a Pascal compiler in Algol-68.
Whilst Pascal has largely been superseded, Goto is still considered harmful.
You raise an interesting point.
>The reason that you're stuck with a C interface is because that's what the OS is written in.
Whilst that is true, it is also not the whole truth.
Prior to Unix, other OSes did provide compilers, APIs and libraries for other high-level languages such as COBOL and FORTRAN; i.e. the OS authors didn't expect application programmers to use the same language as them. This is probably in part due to where Unix came from, compared with where OS/390, VMS etc. came from.
The current state of Linux and Unix owes much to people sticking with Unix as originally defined, and not upgrading it into something that supports application programming languages different from the systems programming language it was written in. So in some respects it isn't 'C' that has aged or been outgrown, but the worldview behind Linux/Unix as they currently stand.
>If Linux was written in some trendy new language, would it exist damn near everywhere?
Well, remember the beauty of Unix was that you only needed to write a small amount of assembler to be able to port the C compiler to your platform and then compile Unix for it. So I suggest that if your trendy new language and OS are as portable as C/Unix, then the potential is there.
Interesting, a computing book that seems to have aged well: first edition 1977, second edition 2006.
I suspect many joining the profession today would not give an IT book written in the late 1970s to early 1980s a second glance, thinking the information it contained to be outdated - but they would be wrong.
Whilst I'm sure the new edition does contain much useful new material, I suspect for an undergraduate course in compiler techniques the first edition is probably still more than sufficient and much easier on the pocket.
>I could even mention how useful it is to learn a language like C that forces you to learn to think more like a computer, rather than a compiler who goes out of its way to pretend you can just write plain english instead of code
Depends on what it is you are wanting to achieve. Having learnt 'C' or other similar procedural languages, you may find non-procedural languages like Prolog a bit of a challenge.
Personally, I enjoyed the purity of Algol-68 as it allowed you to forget about the machine and focus on algorithms and data structures. Prolog and LISP were likewise interesting, if only to challenge the brain. Now, writing programs for highly parallel or distributed environments, that was a challenge...
> C programmers doggedly hang on to their beliefs of superiority.
I don't, and I helped write one of the first C development environments for the PC back in the 1980's.
From the article, it would seem classic K&R White Book C has suffered from poor standards development over the decades; the issue of intmax_t speaks volumes about the poor quality of input into the C99 specification, rather than about whether or not 'C' is a programming language.
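For those who haven't read the article, the intmax_t trap in a nutshell (a sketch; the 128-bit widening is the usual hypothetical, and lib_sum is my invention):

    #include <stdint.h>

    /* Exported from a shared library compiled years ago: */
    intmax_t lib_sum(intmax_t a, intmax_t b) { return a + b; }

    /* intmax_t promises to be "the largest integer type" - in
     * practice 64 bits on mainstream platforms today. If a future
     * compiler widened it to 128 bits, as the name implies it
     * could, every existing binary calling lib_sum() would pass
     * its arguments with the wrong sizes/registers. The ABI has
     * frozen the "maximum" forever. */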
There is a lesson here for the supporters of other languages such as Rust and Swift: your language might be good and well specified today, but poorly thought-out enhancements to the language specification and standard over the decades can lead to massive maintenance problems.
The 'superiority' of 'C' was always both a marketing claim, used to sell it against FORTRAN, COBOL etc., and something "hobby" programmers used to claim that, by knowing 'C', they were somehow more professional than those who only knew VB and/or Pascal.
Also, just as most these days will build a new platform on Windows and/or Linux rather than develop their own new OS, having built your platform it makes sense to include tools that many are familiar with - hence the 'C' compiler. Note all this is determined by marketing, not by C developers.
As the article makes clear, some languages better support certain types of IT problem than others, and so the best advice is to use the right tools. That's why in the 80's, in addition to using 'C', I also learnt Occam and Ada. Hence my advice today: whilst you might prefer to write in, say, Rust, you should also learn Swift and other languages, including COBOL and FORTRAN, so that you are able to intelligently select tools...
Well, it seems the current HP management aren't much better than the previous lot, although they didn't offer such a high premium.
"Currently, there are more than 90 million rooms, of which less than 10 percent have video capability. As a result, the office meeting room solutions segment is expected to triple by 2024," HP said
Trouble is, the majority of those rooms were successfully used for Zoom, Teams etc., so explain why they need a Poly solution.
Interestingly, having spec'ed up a modern video meeting room/studio for a client, it was very obvious that the traditional Poly conference/meeting solution wasn't fit for purpose. However, I can see some being attracted to the idea that, to have a video meeting, the participants need to go to a specific room and sit in a specific seating pattern.
>There's no benefit to using blockchain in a stock market.
I would suggest there is every reason to think there are many negatives to using blockchain in a stock market...
The whole point of blockchain is to decentralize record keeping for transactions between parties that have no reason to trust each other.
This effectively means the stock market replaces central transaction processing with distributed transaction processing, which, given the volumes and the timing-critical nature of the transactions, suggests things can only end badly...
>So I am thinking about upgrading to something newer! Since some £200 jobbie off Ebay will pay for itself within a year or so...
There was a frequent contributor on El Reg a few years back who made the case for buying new rather than reusing old servers: the energy-efficiency improvements mean the price difference tends to pay for itself through energy-bill savings.
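The arithmetic is straightforward (a sketch with purely illustrative figures - the wattages, tariff and £200 price are my assumptions, not the original contributor's numbers):

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative figures only: */
        double old_watts = 350, new_watts = 120;  /* 24x7 draw   */
        double tariff    = 0.30;                  /* GBP per kWh */
        double price     = 200.0;                 /* GBP, used server */

        double kwh_saved = (old_watts - new_watts) * 24 * 365 / 1000.0;
        double saved_gbp = kwh_saved * tariff;
        printf("Saves %.0f kWh/yr = GBP %.0f/yr; payback in %.1f years\n",
               kwh_saved, saved_gbp, price / saved_gbp);
        return 0;
    }

With those numbers, the £200 box pays for itself in about four months.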