* Posts by BobC

42 publicly visible posts • joined 23 Nov 2009

Lessons to be learned from Google and Oracle's datacenter heatstroke

BobC

Re: Subheading

Yeah, but the heading and subheading should have been swapped.

Sigh. This is no longer the El Reg of olden days, the one that was more aggressive about "Biting the Hand that Feeds IT".

Four charged with tricking Qualcomm into buying $150m startup

BobC

Re: Where were the lawyers?

As an engineer I've been on both sides of M&A due diligence efforts. Some of these have had disclosure limitations (generally regarding trade secrets) that complicated the effort, as well as other complex IP that required lots of effort to explain to lawyers and accountants to enable them to help assign an objective valuation.

The hardest part was "capturing" the human assets. Much of M&A value isn't just in existing tech, but in creating more in the future. This requires establishing the bona fides of the existing IP and how it came to be. The most direct way to trace who did what comes from the patent process, and the employee notes documenting the path to the patent.

This is often the easiest way to detect IP fraud: See who is named as a creator for parts of some specific tech, then interview them about not just the tech, but their process and timeline for creating it.

Tesla Full Self-Driving 'fails' to notice child-sized objects in testing

BobC

Road Awareness and Secondary Sensors

The underlying issue is situational awareness, or lack thereof. Tesla FSD has problems with motorcycles and children, especially at the edges of the lane.

When such things ARE detected, the correct responses are clear and proven. But none of the responses can be activated if no detection occurs!

In part, I believe this is due to Tesla's removal of front-facing radar. Not that radar itself is a cure-all, but having ANY secondary detection method (something non-camera in this case) will help, even if it is less than perfect on its own.

There are several other relatively inexpensive technologies that can help with the secondary detection role, including infrared ranging/imaging, LIDAR, ultrasonic sensors and more.

To me, this is fundamentally a sensing problem, which falls way upstream of the software, the vehicle, and the driver. If you miss noticing something, you can't avoid it.
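
A toy numeric sketch of that point (all detection/miss rates here are invented, and real fusion logic is far more involved): even a mediocre but independent second channel slashes the odds of missing something entirely, provided its failures aren't correlated with the camera's.

```python
def should_brake(camera_detects: bool, secondary_detects: bool) -> bool:
    """Simple OR fusion: brake if EITHER channel reports an obstacle ahead."""
    return camera_detects or secondary_detects

print(should_brake(False, True))   # True: the imperfect secondary channel still saves the day

# If the camera misses 1 object in 10 and an independent, mediocre secondary
# sensor misses 3 in 10, the fused system misses only ~3 in 100 (assuming
# independent failures).
p_miss_camera, p_miss_secondary = 0.10, 0.30
print(f"fused miss probability ~{p_miss_camera * p_miss_secondary:.2f}")
```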

Not Azure thing: Using MS's Quantum to schedule chats with spacecraft on the DSN

BobC

I'm just here for the wordplay.

Great headline on this one!

I've been meaning to set myself the task of collecting "the best of" El Reg headline wordplay, but the deep history here is daunting.

El Reg and I go way, way back. To the prior millennium. I'm certainly not an IT guy. I am here for the wit, and on rare occasions for the reportage.

And the space projects. Aren't we overdue for another of those?

After deadly 737 Max crashes, damning whistleblower report reveals sidelined engineers, scarcity of expertise, more

BobC

Re: In Case of MCAS: Logical Reasoning, Calculus

We did this with a bunch of values, some of which were industry firsts that we intentionally did not patent, instead keeping them as Trade Secrets. Though I've been gone from the company for a while, those are still covered under my NDA. So I don't talk about any of them in specifics, to avoid saying things I shouldn't.

Fundamentally, what we did was perform multi-variate statistical analysis on the entire streaming data set. The FAA has standardized multiple synthetic sensors, but those are simply physics formulas with the terms rearranged to isolate the value of interest. Our approach let us tease out some bizarre and surprising relationships and correlations, after which we would determine the "number of valid bits" present (a form of Shannon Entropy).

In the case I cited, we had just 2 valid bits, which we tweaked to give us 3 operationally useful states plus an active "failure" state (crafting "good" failure state bits deserves a totally separate discussion).
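
For illustration only, and emphatically not the proprietary method described above: one crude, generic way to put a number on the "valid bits" of a derived value is to compare its usable range against its residual error (an ENOB-style estimate). Every signal and noise figure below is invented.

```python
import numpy as np

def effective_bits(estimate, reference):
    """log2(usable range / RMS error) of an estimate versus a trusted reference."""
    span = reference.max() - reference.min()
    rms_err = np.sqrt(np.mean((estimate - reference) ** 2))
    return np.log2(span / rms_err)

rng = np.random.default_rng(1)
truth = np.linspace(-10.0, 10.0, 10_000)               # quantity of interest
synthetic = truth + rng.normal(0.0, 2.5, truth.size)   # noisy derived estimate
print(f"synthetic sensor carries ~{effective_bits(synthetic, truth):.1f} valid bits")  # ~3 bits
```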

Like the FCC, the FAA also has the ability to protect intellectual property while still getting certifications done. First, the DERs were shocked, to the point that they made us prove our testing actually tested what we said it was testing (more Design of Experiments stuff). Once we had them onboard, we had to do it again with the FAA, who had red-flagged our submission package. The FAA visit to our facility fundamentally changed our relationship with them, in that they used our process as a template for how to do multi-pronged certification testing (theoretical modeling, simulation, lab tests, flight tests, and deep statistical analysis) without breaking the bank or taking years.

Our process directly helped other small aircraft instrumentation companies move their instruments from the uncertified Experimental Aircraft market (including the home-built market) to full FAA certification. We loved the competition, and even worked with one of them to add their FADEC technology to our autopilot product line. We certified their FADEC, which we both treated as a straight-across trade, with no money changing hands.

BobC

Re: In Case of MCAS: Logical Reasoning, Calculus

There is the very real problem of managing multiple sensors. Sure, something close to perfection could be obtained, but only by pricing the product out of the market. How do we ensure truly adequate safety with limited hardware?

There are two perspectives that may be combined to provide some hints.

The first, of course, is the entire field of Design of Experiments, a fundamental tenet of which is simply asking the question: "Are you ACTUALLY measuring what you THINK you are measuring?" This primarily affects repeatability of a test or experiment, but also applies big time to instrumentation.

The second comes from the extended discipline of Digital Signal Processing, specifically the notion of Sensor Fusion.

When combined, you can create what some call "Virtual Sensors". These are values derived from other values in a manner that is PROVEN to be completely independent of a sensor directly measuring the value. Mathematically, they are always inferior as their error is often the product or sum of the constituent value errors. However, even a shitty additional synthetic sensor is VASTLY better than no sensor at all!

In one case, I synthesized a truly lousy synthetic sensor that was accurate to only three states: "The Value Is Increasing", "The Value Is Decreasing" and "The Value is Steady Within 10%". That's it. Yet that single very crude (yet totally independent) value allowed us to massively improve the robustness of a vital subsystem, leading to awesome certification test results.

Another perspective is to understand how a requirement to "Always Show The Correct Value" is fundamentally and philosophically different from a requirement to "Never Show An Incorrect Value". It was to satisfy the latter requirement that my extremely crude 3-state value proved its worth: If the behavior of the displayed value and my synthesized value disagreed for more than 3 seconds, we would DISABLE THE ENTIRE DISPLAY to force the pilot to use a backup or reversionary instrument (including steam gauges).
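
A minimal sketch of that "never show an incorrect value" guard, using invented names, the three trend states described above plus a failure state, and the 3-second disagreement window from the text. Certified avionics code obviously looks nothing like this; it only illustrates the logic.

```python
from enum import Enum

class Trend(Enum):
    RISING = 1
    FALLING = 2
    STEADY = 3       # "steady within 10%"
    FAILED = 4       # the active failure state

DISAGREE_LIMIT_S = 3.0

class DisplayGuard:
    def __init__(self):
        self.disagree_time = 0.0
        self.display_enabled = True

    def update(self, displayed_trend: Trend, synthetic_trend: Trend, dt: float) -> bool:
        """Disable the display if the crude synthetic sensor disagrees for too long."""
        bad = synthetic_trend == Trend.FAILED or displayed_trend != synthetic_trend
        self.disagree_time = self.disagree_time + dt if bad else 0.0
        if self.disagree_time > DISAGREE_LIMIT_S:
            self.display_enabled = False   # force the pilot onto a backup/reversionary instrument
        return self.display_enabled

guard = DisplayGuard()
for _ in range(40):                        # 4 seconds of sustained disagreement at 10 Hz
    ok = guard.update(Trend.RISING, Trend.FALLING, dt=0.1)
print(ok)                                  # False: blank the display rather than risk showing garbage
```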

We did not need to add a duplicate sensor within our instrument, and frankly we had neither the room, schedule, nor budget to do so. We only had to make certain we had the inputs needed to independently compute that synthetic sensor.

BobC

Re: The People ARE The System.

I've got to say one important thing: Having a rock-solid test and certification environment LET US TAKE RISKS and INNOVATE!

We did some really wild-assed hair-on-fire engineering for new products and feature development, then immediately (and confidently) slammed the prototype into an aircraft after it had passed our baseline ("Safe For Flight" or SFF) lab testing.

As an example, we needed longer full-operation backup power within one instrument. Our prior solution used a lead-acid gel cell (extremely safe and long-lived), but that approach was too heavy, too large, and had insufficient capacity for the new instrument. We didn't want to use Lithium-Ion batteries for MANY reasons (temperature profile, shipping, recharge cycles, etc.). We instead used ultracaps, and developed (and patented) entirely new ways to make them work in instrumentation and aircraft environments. Early lab units literally burst into flames during testing, so before initial flight tests (to pass SFF testing) we developed an all-new fire-proof insulated instrument enclosure (which was also patented).

This kind of rapid investigation, prototyping, development and deployment would be IMPOSSIBLE without ABSOLUTE TRUST in our testing environment, and our SFF testing in particular.

To be clear, a block of aluminum can be certified as SFF because it has few failure modes, every one of which can be tested to exhaustion. Our SFF tests merely ensured the instrument wouldn't kill anything outside of itself. SFF testing did NOT ensure the instrument would do anything useful!

BobC

Re: The People ARE The System.

THIS! Oh, lawdy, did I love testing our shake table fixtures until our instruments started flying (without being installed on an aircraft). Proper fixturing for vibe tests can be modeled to death, but only "shake to failure" gives the data needed to validate the fixture.

BobC

Re: The People ARE The System.

Getting independent testing groups up to speed is expensive and slow, and the sunk costs make it very difficult to change test vendors. What I recommend is in-house test performance with independent OBSERVATION and CRITIQUE.

Good, experienced external test monitoring and auditing isn't cheap, but it's both nimble and very effective. The auditors/advisors (DERs in our case) have their independence and industry reputations to consider. We've quietly "suggested" to DERs we let go (or decided not to re-contract) that they either up their game or consider another vocation. That's why we kept a very senior DER under contract: No junior DER is ever perfect, or even adequate, and we aren't the ones to make them better at their jobs. Hence the greybeard DER.

BobC

The People ARE The System.

Any system becomes a bad system once it has been compromised. And, unfortunately, no system humans will ever create will be immune from compromise. The key is for all participants to perform to clear and well understood ethical standards, and to talk about them frequently.

I've helped create aircraft instruments, after which I had to flip my hat around and help get them certified. I love writing software and doing systems engineering. I hate testing: It's boring as hell. I only want to do the testing until it passes, and to know that the pass is a good pass, not an accident or a statistical fluke. So I'm a testing asshole.

I worked with FAA DERs (Designated Engineering Representatives) to completely overhaul our testing environment, from the ground up. The equipment we used, the procedures we followed, the test artifacts we generated, the environments within which testing was performed. We used formal proofs when needed, but we much preferred to test our instruments to extremes in the lab, and with lots of flight hours on company test aircraft.

This was always part of the company culture: One of the first things said to me during my first interview was: "We are a test and certification company that happens to make aircraft instruments." Every new engineering hire, after getting up to speed on the product and its development, was then expected to make substantial and meaningful contributions to the test and certification system. I had experience in other safety-critical sectors, including nuclear power, and I was expected to bring all of that experience to bear on all of our processes, including test and certification.

We intentionally chose to not have in-house DERs. We always hired independent contractors, typically a couple junior ones and one senior-as-hell greybeard to get the best out of them. The advantage being that we could rotate DERs on each project, and learn from each other. Well, we never rotated that senior bastard: He was just too good at what he did.

A crypto-trading hamster is outperforming the S&P 500, Nasdaq, Bitcoin

BobC

I've Waited Decades For This!

Hampster Dance, The Sequel: Dancing For Dollaz!

Ref: http://hamster.dance/hamsterdance/

This drag sail could prevent spacecraft from turning into long-term orbiting junk. We spoke to its inventors ahead of launch

BobC

Re: Electrodynamic tethers are even better

This. Electrodynamic tethers are multi-purpose miracles for satellites in any Earth orbit, needing only to be made longer as the orbit gets higher.

1. Dumb-simple mode: Provide a closed loop with some resistance to dissipate magnetically induced electrical energy as heat. Some will point to the problem of making the plasma connection for the return circuit, which can be avoided by using dual tethers in opposition. This passively ensures deorbit. You could attach it to a rock. (Rough numbers follow the list below.)

2. Emergency power mode: When a satellite becomes unable to properly point its solar cells, the loss of power would trigger tether deployment/activation, which can provide adequate power to permit a fully-controlled deorbit maneuver.

3. Active Propulsion Mode: An electrodynamic tether can directly be used as a thruster against the ambient magnetic field by pumping power into it, and/or as a power source for an ion thruster. Meaning not only can a desired orbit be maintained, but the disposal options also increase, including moving expired satellites to parking orbits.
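
As promised in mode 1 above, a rough back-of-the-envelope sketch of passive dissipation, using assumed round numbers for LEO. Real designs are limited by plasma contact, tether mass and geometry, so treat these as order-of-magnitude figures only.

```python
B = 3.0e-5      # tesla: rough geomagnetic field strength in LEO
v = 7_700.0     # m/s: orbital velocity
L = 5_000.0     # m: tether length (assumed)
R = 1_000.0     # ohms: total loop resistance (assumed)

emf = B * v * L            # motional EMF along the tether
current = emf / R          # current driven around the closed loop
drag = B * current * L     # Lorentz drag force opposing the orbital motion
power = emf * current      # orbital energy removed and dumped as heat

print(f"EMF ~{emf:.0f} V, current ~{current:.2f} A, drag ~{drag:.2f} N, power ~{power/1e3:.1f} kW")
# -> roughly 1.2 kV, 1.2 A, 0.2 N of drag, 1.3 kW of heat: slow but relentless deorbit
```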

There are other factors and benefits, chief among them being: Is the tether spinning or dangling?

Satellites needing to be Earth-oriented often use a "gravity gradient boom" for coarse pointing, augmented by thrusters and torquers for fine pointing. Such a boom can be designed to also be an electrodynamic tether, where its default mode would be open-circuit, so as to avoid degrading the orbit.

Tether mass is also a factor, with the weight changing from ounces to tons for electrodynamic tethers, depending on the magnetic field flux at the desired orbit. However, electrodynamic tethers are not the only option: Tethers can be useful beyond LEO when their mechanical properties are emphasized.

A spinning tether can grasp an object in a higher orbit and throw it to a lower orbit (steal velocity from it) while raising its own orbit and/or increasing its spin. This is an extremely efficient way to actively deorbit dead junk in LEO. However, if the tether has its own power source and is also electrodynamic, it becomes a self-sustaining means to toss payloads to higher orbits, even to the Moon. If there is roughly symmetric bi-directional traffic, the tether's onboard propulsion need only provide enough thrust to control its orbit, with the inbound payloads spinning up the tether for the outbound payloads.

Such "flinger" tethers can move payloads back and forth between lunar orbit and the surface, with zero velocity at the place and moment of lunar contact. In essence, such a tether would be in lunar orbit, and "walk" along the surface, where each point of contact can be a pickup/drop-off location.

Sure, tethers can be used to deorbit satellites. But that's literally just half the story.

Wireless powersats promise clean, permanent, abundant energy. Sound familiar?

BobC

Except power sats can still beam power down during terrestrial night, when local solar goes silent.

International Space Station stabilizes after just-docked Russian module suddenly fires thrusters

BobC

System Integration is Hard. In Space it is Harder.

The first mistake was having Nauka ABLE to fire thrusters before all systems had been checked out post-docking. That's like putting in a new pump motor with the power hot.

I'm wondering which Nauka thrusters were used: If the big ones, they may already be EOL (Nauka's dozen small thrusters have much longer lifetimes). The fact that Nauka used up all its fuel so quickly makes me suspect it was indeed the large thrusters that fired.

I hope the Russians (and EU?) are able to quickly send enough fuel for both Zvezda and Nauka. SpaceX should be able to handle fuel needs for the rest of ISS. (I also don't know if Cargo Dragon can carry and transfer Russian fuel.)

Current situation aside, it would seem that Nauka received less than complete testing, particularly integration testing. Sorta reminds me of Boeing's Starliner. And the original Hubble mirror issue.

Space is extremely unforgiving of system integration mistakes, nearly all of which can be attributed to inadequate pre-/post-integration testing.

Have you turned it off and on again? Russia's Nauka module just about makes it to the ISS

BobC

Nauka = Science!

"She's tidied up and I can't FIND anything!"

I no longer have a burning hatred for Jewish people, says Googler now suddenly no longer at Google

BobC

Change is Hard

I had undiagnosed depression from puberty until my late 30's. It warped my view of the world, how I interpreted everything around me, leading to equally warped actions and reactions. I desperately wanted friends and relationships, but was unable to sustain them, so I'd destroy them before it became obvious. As a person, I was a piece of crap.

I wasn't totally clueless. I knew I was broken, and I felt badly about myself, which often rose to the level of self-hate. Rather than deal with it, I instead wove extravagant lies to hide behind, to appear to be a better person, one people would like.

When I finally got effective therapy, cleaned out and organized my mind and emotions, I literally became a different person. The few people close to me noticed it immediately, and cheered me on. As I improved, I started emerging into the world, started to "have a life". Tried to renew and revitalize old ties.

Only to face the wreckage I had created. My therapy had dealt well with getting me right inside, but I went running back to learn how to deal with others, especially those who had been hurt by the "old me". The "fix" was deceptively simple: I had to become a master of the apology, and of forgiveness. I also had to become able to cope with anger aimed my way, to accept it without reacting in kind. Dealing with others was much harder than the internal work I had done.

All too often there was nothing left to be repaired: Many wanted to have nothing more to do with me, for a variety of totally valid reasons. They had a view of me anchored in experience and pain, cast in stone. This was the hardest aspect of my recovery to accept, and I never really did.

Others reaching this point in their own recovery paths will turn to religion, seeking redemption and salvation. For me, that would have been like trying to wash it away, rather than living with and dealing with it.

Change is hard. History is implacable and indelible. No matter how rosy the future looks, the present is where the future and past meet and mix.

I started to look at the bigger picture, and one thing I saw was that, had I been diagnosed and received therapy as a teen, much of the destruction would have been averted.

So, yeah, I started over-sharing, broadcasting my own story in the hope it would help others, a warning of the damage undiagnosed and untreated depression can cause. In part, it was also a public apology to those from my past I couldn't reach.

I'm lucky that I had a "medical" condition amenable to treatment. Many refuse to accept that racism can also be treated: once a racist, always a racist; a racist past is a rubber-stamp for a racist future.

Typically, a racist (well, all of us) will also have problems other than racism. Changing one won't change the rest, no matter how they are interconnected. So even if the racism does change, there is little respite until the rest changes as well.

We need to always keep change in perspective. To help it along, seeing it as a process built from many steps, never a single leap. The perception of change can often come in leaps, a flash of realization when we add up all those little steps.

When I see change like this, I have a standard reaction: "Good on ya' mate! What's next?"

Digital delinquent deletes developer's database during disastrous Docker deployment, defaults damned

BobC

Always-appropriate alliteration adds article attraction!

Also: Always-appropriate alliteration adds article attraction!

Hubble Space Telescope may now depend on a computer that hasn't booted since 2009

BobC

Cosmic rays can leave "punch-through" conduction channels.

Even in unpowered equipment, cosmic rays can cause significant damage. Fortunately, much of this damage (ionization tracks) self-heal, but the remaining damage (crystal disorder, dopant displacement) will be permanent.

Extremely careful power-on sequences can assess this damage without risking further damage from localized over-current. Briefly energize the first supply, measure its current ramp, turn it off. If good, repeat adding the second supply, and so on until the system is fully powered.
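
A minimal sketch of that cautious power-up sequence, with invented rail names, thresholds, and hardware hooks; in real equipment this lives in a supply sequencer or PMIC plus telemetry, not in Python.

```python
SUPPLIES = ["core_1v0", "io_1v8", "analog_3v3"]                     # assumed rail names
MAX_INRUSH_A = {"core_1v0": 0.8, "io_1v8": 0.3, "analog_3v3": 0.2}  # assumed limits

def current_ramp_ok(rail, enable, measure) -> bool:
    """Briefly energize one rail, watch its current ramp, then shut it off again."""
    enable(rail, True)
    peak = max(measure(rail) for _ in range(100))   # sample the inrush
    enable(rail, False)
    return peak <= MAX_INRUSH_A[rail]

def staged_power_up(enable, measure):
    """Bring up rails one at a time, stopping at the first suspect one."""
    healthy = []
    for rail in SUPPLIES:
        if not current_ramp_ok(rail, enable, measure):
            return healthy, rail        # stop before a damaged rail overheats
        enable(rail, True)              # rail looks sane: leave it on and add the next
        healthy.append(rail)
    return healthy, None

# Demo stubs standing in for the real enable/measure hardware hooks:
state = {r: False for r in SUPPLIES}
enable = lambda rail, on: state.update({rail: on})
measure = lambda rail: 0.1              # pretend every rail draws a modest 100 mA
print(staged_power_up(enable, measure)) # (['core_1v0', 'io_1v8', 'analog_3v3'], None)
```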

In general, keeping redundant spares in the off state is vastly preferred to running them all the time with voting logic. Only the few most critical subsystems need that level of robustness.

Facebook granted patent for 'artificial reality' baseball cap. Repeat, an 'artificial reality' baseball cap

BobC

AR hardware should work independent of hat selection.

Since shaving my head 30 years ago, I've accumulated quite the collection of hats (including helmets), with one worn most hours of most days, switching as needed due to my whim, my environment, or the hat's cleanliness.

Any hat-based AR capability must not be tied to any one physical cap. I should be able to wear my AR goggles with any of my hats, or without a hat.

Getting the goggle weight off the bridge of the nose is a noble goal. But integrating the entire device into a hat is an extreme solution.

How about instead designing "hat suspension systems" for AR goggles? Where, when I choose not to wear a hat, I can use the default interface of nose bridge and/or straps.

That Salesforce outage: Global DNS downfall started by one engineer trying a quick fix

BobC

What probably happened...

Tech: The new Hyperforce install is done.

Management: Let's go live!

Tech: It will take a few days to slow-roll the DNS changes.

Management: We need to show billing this week! Can we do the DNS changes faster?

Tech: Yes, but it is considered an emergency procedure.

Management: Is it risky?

Tech: Not really. I've done it several times.

Management: I'm giving you verbal authority to do it now!

Tech: OK.

Management: What just happened?

Tech: Oops. Something broke.

Upper Management: What just happened?

Management: THE TECH DID IT! THE TECH DID IT! IT'S ALL THE TECH'S FAULT!

Tech: Sigh.

When software depends on a project thanklessly maintained by a random guy in Nebraska, is open source sustainable?

BobC

Working from Within

At each of my past three employers, one of the first things I do when diving into our product repos is assess our "OSS Vulnerability" (an intentionally trigger-worthy phrase). I make a list of all external dependencies, ensure each is explicitly referenced (not just an anonymous import from some repo), check their versions, and check for CVEs against each. I also check each OSS project's license and whether it is included in our releases to our customers. For OSS code that we've modified, I also check that we provide repos our customers can access (for GPL compliance, at least the base code with patches must be publicly shared, but the metadata is exempt).
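
A minimal sketch of the inventory step of such an audit for a Python codebase, using only the standard library: list every installed third-party distribution with its version and declared license, and flag anything without one. The CVE lookup and the "how much of it do we actually execute" profiling are deliberately left out.

```python
from importlib.metadata import distributions

def dependency_inventory():
    """(name, version, declared license) for every installed distribution."""
    rows = []
    for dist in distributions():
        meta = dist.metadata
        rows.append((meta["Name"] or "?", dist.version, meta["License"] or "UNKNOWN"))
    return sorted(rows)

for name, version, license_ in dependency_inventory():
    flag = "  <-- check this one!" if license_ == "UNKNOWN" else ""
    print(f"{name:30} {version:12} {license_}{flag}")
```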

I then try to approximate the overall product dependency for each OSS project used, starting with how much of it is used (mainly by execution profiling), from as little as a single function call, to being a central piece of the product architecture. I send the resulting list to Management, and ask what level of paid support we have for each OSS element we use, and if we have contingency plans for if/when each OSS project, for whatever reason, stops getting updated.

It's not what Management typically wants to hear from a new hire, but it cements my place within the organization as someone willing and able to do the due diligence and start the risk assessment. After the initial reaction, I then advocate for adding paid support for our most critical OSS components, and also provide a system for employees to create and share OSS PRs upstream without violating their employment contracts (such as by making the company itself the author/contributor, removing the engineer's name from the PR, but tracking it internally).

The biggest push-back from Management is the cost we pay to stay current with the OSS releases we rely upon, as if the churn in our codebase is somehow the responsibility of the OSS project, rather than of our use of it. Politely pointing out such illogic is the most delicate part of my effort.

The biggest battles come when we rely on very large OSS projects (say, Yocto) that are themselves amalgamations of other OSS projects. Some tweaks we make can't be publicly shared due to restrictive hardware vendor IP agreements (I'm lookin' at you, Broadcom and Qualcomm), in which case we must "embarrass" such vendors into DTRT ("Doing The Right Thing") when we pass our patches upstream through them. (We even had one such vendor claim we couldn't publish our source changes under the OSS license terms because their IP license took precedence: I typically ask Management to punt such silliness to the lawyers.)

Another part of the process is to get customers on board with our desire for OSS compliance in both spirit and letter. I have worked on bespoke products that were central and critical to a customer's very existence (not just competitiveness), making them apprehensive about any form of disclosure, especially intentional disclosure. One approach that helps is presenting an estimate for the work needed to replace an OSS component with bespoke code, and the knock-on effects of project delays and increased collateral costs. Money talks. But getting Management and the Customer to add lines to a contract to explicitly fund OSS support is always a Sisyphean effort.

The final battle is convincing the company to be a "Good OSS Citizen" by publicly sharing code that's useful to us, but not core to our products, yet would almost certainly be of general benefit. I try to make the point that, even if we can't release core code, we can at least attempt to strike a balance by releasing other code instead. I have never succeeded in getting an employer to create public repos. Not even for little things, like some scripts used to help our use of cmake be faster (better cache management), easier (automated reminders and sanity-checks) and more reliable (isolated cmake testing).

If you look at each of the above as chokepoints, then the next realization is the existence of a large "impedance mismatch" between OSS projects and their use by most commercial entities.

Are there ways to improve this situation? I'm no Manager or Corporate Officer, so I won't make remarks in those domains. I do believe the best solutions lie in evolution both in corporate structure and regulatory structures (to "level the playing field"), but am unsure what such changes would look like or how they would be implemented.

In the short term, however, we must be pragmatic, and at least try to learn about improving OSS participation by actually doing it, while ensuring minimal risk if over-disclosure occurs. From an engineering perspective, I have suggested we ensure our software is useless without our bespoke hardware, and that we make our hardware extremely difficult to reverse-engineer. This can be done by implementing key algorithms in FPGAs rather than as executable source code, and storing the FPGA blobs in encrypted memory. Then start with limited increases in OSS participation, staying far, far, far away from OSHW, at least for the moment.

Not an ideal approach, to be sure, but it is, at least, a way to crack the door open, if only slightly.

Boeing big cheese repeats pledge of 737 Max software updates following fatal crashes

BobC

MCAS from a Systems Perspective

I was an engineer at an aircraft instrument maker while it was developing and certifying its TAWS (Terrain Awareness and Warning System) product. I worked in essentially all aspects of the product except for the design of the hardware itself. In particular, I was involved in requirements analysis, software design and development, hardware production (processes and tools), and FAA certification.

Though TAWS provided only audible and visual warnings, we were greatly concerned that we not urge the pilot to take excessive or inappropriate action, such as by announcing a warning too late, or by announcing an incorrect warning. The official TAWS specification described very well what the system must do, but did much less to define what the system must **not** do.

One of my prime responsibilities was to conceive of ways to "break" TAWS, then update our requirements to ensure those scenarios were properly handled, and then update our certification procedures to ensure those scenarios were thoroughly tested. Many of my findings revealed "holes" in the official FAA TAWS specification, some of which were significant. (Being a competitive company, we fixed them in our product, then reported them to the FAA as real or potential flaws in competing products. Essentially, we kicked the hornet's nest every chance we could. Fun for us, harmful to our competitors, and safer for everyone.)

The "single-sensor problem" is well known and understood within the avionics community. However, as our TAWS was initially an add-on product for existing aircraft, we often couldn't mandate many aircraft changes, which could greatly increase the cost and down-time needed to deploy our product. Fortunately, all aircraft are required to have "primary" and "secondary" instruments, such as a GPS heading indicator backed by a magnetic compass. Furthermore, sometimes the display for a low-priority function can be made to serve as a display for a secondary sensor when the primary fails, in which case it is called a "reversionary" display.

The sweet spot for us was that the vast majority of our initial TAWS customers would already have some of our instruments in their cockpit, instruments that already had all the inputs needed to serve as reversionary displays. Inputs that can be shared with our TAWS product.

When we looked at the whole TAWS specification from that perspective, we realized there were circumstances when "primary" and "secondary" instruments may not suffice, particularly if both relied on the same underlying technology (such as digital and analog magnetic compasses, which sense the same external magnetic field - meaning a non-magnetic way to determine magnetic heading was needed, which GPS could help provide).

I had prior experience in "sensor fusion", where you take a bunch of diverse/redundant/fragile sensors and combine them to make better "synthetic" sensors. Back in the day this was done with various kinds of Kalman filters, but today a neural net would likely be more practical (primarily because it separates the training and deployment functions, making the deployed system simpler, faster and more accurate).

So, for example, let's say all your physical AoA (Angle of Attack) sensors died. Could you synthesize a suitable AoA substitute using other instruments? Given a list of other functional sensors, the answer is almost always "yes". But only if there is a physical system component where all those other sensors meet (via a network or direct connection). We had that meeting spot, and the compute power needed for a ton of math.
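
A purely illustrative sketch, and not the certified product's algorithm: one crude way to synthesize an AoA stand-in when the vanes are dead is from pitch attitude, true airspeed, and vertical speed. It is only meaningful in roughly steady, wings-level flight, and every signal name here is an assumption.

```python
import math

def synthetic_aoa_deg(pitch_deg: float, true_airspeed_mps: float, vertical_speed_mps: float) -> float:
    """AoA ~= pitch attitude minus flight-path angle (steady, coordinated flight only)."""
    flight_path_deg = math.degrees(math.asin(vertical_speed_mps / true_airspeed_mps))
    return pitch_deg - flight_path_deg

# Example: 5 deg nose-up, 70 m/s TAS, climbing at 2 m/s -> roughly 3.4 deg AoA
print(round(synthetic_aoa_deg(5.0, 70.0, 2.0), 1))
```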

But even synthetic instruments need primary and secondary instances, which meant not only developing two independent algorithms to do the same thing in different ways, but also, to the greatest extent possible, running both of them on redundant hardware. Which, again, our initial customers already owned!

This extended to the display itself: What if the display was showing an incorrect sensor value? The secondary or synthetic sensor was compared with what was actually being shown on the display. If we detected a significant mismatch, this system would simply disable the display, completely preventing the pilot from ever seeing (and responding to) any bad information.

I'm concerned Boeing didn't do enough analysis of the requirements, design and implementation for MCAS: My guess would be that MCAS was developed by a completely separate team working in a "silo", largely isolated from the other avionics teams. For example, this is an all-too-common result of "Agile" software development processes being applied too early in the process, which can be death for safety-critical projects. And, perhaps, for those relying on them, including passengers, not just pilots. Yes, a company's organization and processes can have direct safety implications.

Another example: When an automated system affects aircraft actuators (throttles, flaps, rudder, etc.) the pilot must be continuously informed which system is doing it, and with a press of a button (or a vocal command) be able to disable that automated subsystem. It seems the Boeing MCAS lacked both a subsystem activity indicator and a disable button. Though it won't happen, I wish this could be prosecuted as a criminal offense at the C-suite, BoD and senior management levels.

I believe the largest problem here was all levels of aircraft certification testing: It would appear the tests were also developed in "silos", independent of their relationships to other parts of the system, including the pilot. The TAWS product I worked on was also largely self-certified, but we did so using, again, two separate certification paths: Formal analysis and abundant flight testing (both real and simulated).

The key element for FAA self-certification to be worth believing in relies on the FAA requirement for aviation companies to work with FAA-credentialed DERs (Designated Engineering Representatives). DERs are paid by the company, so there is great need to ensure no conflicts of interest arise. On our TAWS project, the first DER we worked with was a total idiot, so we not only dismissed him, but requested the FAA strip his credential and investigate all prior certifications he influenced. After that incident, we worked with a pair of DERs: One hands-on DER who was on-site nearly every day, and another God-level (expensive) DER who did independent monthly audits. We also made sure the two DERs had never previously worked together (though the DER community is small, and they all know each other from meetings and conferences).

Did Boeing work with truly independent DERs? I suspect not: There are relatively few DERs with the experience and qualifications needed to support flight automation certification. Which means "group think" can easily set in, perhaps the single greatest threat to comprehensive system testing. I predict several FAA DERs will "retire" very soon.

Even from reading only the news reports, I see several "red flag" issues the NTSB and FAA should pursue as part of the MCAS investigations.

Bottom line, the Boeing Systems Engineers and the FAA DERs have well and truly dropped the ball, not to mention multiple management-level failures. I'm talking "Challenger-level" here. Expect overhauls of both Boeing and the FAA to result. Expect all certs for all Boeing products still in the air to be thoroughly investigated and reviewed by the NTSB, NASA, the EU and China/Russia. Do not expect any new certifications for Boeing for perhaps years.

Sorry, we haven't ACLU what happened in sealed 'Facebook decryption' case, but let's find out

BobC

ACLU as "a clue"

Perhaps I'm the only one not to have previously seen the above spin on ACLU being pronounced as "a clue". I'm ashamed to admit my brain went briefly into lock-up as I repeatedly re-read the headline until I slowed down enough to interpret it correctly. Sigh.

In any event, this is more evidence for how much I rely on El Reg to keep my word-play wits sharp, and to fill in the gaps when needed.

30-up: You know what? Those really weren't the days

BobC

Gray Hair Everywhere...

So nice to see my fellow fossils reminiscing. Thought I'd toss in my own $0.02.

I wrote my first program while in high school in 1972. On a "retired in place" IBM 1404. In FORTRAN IV. Using punched cards (still have a couple of the decks). Lots of blinkenlights.

Joined the US Navy after high school. Learned to program the venerable CP-642B. In machine code. By pushing front panel buttons. To. Load. Each. Instruction. Until we wrote a program to access the paper tape reader. Then we were able to enter machine code on paper tape. The punchings from the paper tape fell into the "Bit Bucket". S'truth!

Got my first PC in 1979, an Apple ][ Plus. With the Language Card and UCSD Pascal. And a 300 baud Hayes modem. Soon upgraded the 5.25" floppies to massive 8" drives. I may have filled one side of an 8" floppy. Maybe.

Was so impressed with Pascal (compared to BASIC) I left the Navy in 1981 and went to UCSD. Hacked on 4.1BSD, primarily on the brand-new networking stack, particularly sockets. In minor ways I helped convince folks to skip 4.2BSD and go straight to 4.3BSD. I also got to work on quorum-based distributed filesystems. Spent lots of time on the Arpanet (before and during the Milnet/Internet split). Graduated in 1986.

Entered industry and immediately specialized in embedded real-time systems: Instruments, sensors, and control systems. Generally avoided systems that interacted with humans, focusing more on M2M. Devices for things like nuclear power plants, nuclear subs, aircraft, UAVs, ultra-high-speed digital video cameras, satellites, and much more. 8-bits at the start, multi-core 64-bits now. Boxes are smaller and don't get so hot any more. Haven't burned a finger in over a decade.

Wonderful toys, each and every one of them. Couldn't imagine having more fun. And I get paid to do it!

My only career goal has been to stay out of management. Mostly successful at that, but not always.

I'll turn 62 next month, and I have no plans to retire. I'll keep doing this until they pry the keyboard from my cold, dead hands.

Do not adjust your set, er, browser: This is our new page-one design

BobC

Some preferences for reading El Reg...

1. No stock photos, please. Only use photos if El Reg or the story source takes them. Consider providing an "opt-out" for all images on the front page (keeping them only in the full story, or shrinking them to icon-size).

2. No space wasted on white space: Use only the minimum needed to keep stories apart (an em or two should do, perhaps with a thin line).

Your real "value added" isn't just what you originate or gather (many do that), but how you share it, including both spin and snark. I work in an area about as far removed from IT as one can get, yet I continue to read El Reg for the 10% of stories that matter to me directly, and the simple delight of reading the El Reg presentation of the other 90%.

In particular, I think El Reg should revive and expand its space program.

HPE supercomputer is still crunching numbers in space after 340 days

BobC

Using COTS instead of rad-hard devices.

Electronic components hardened to tolerate radiation exposure are unbelievably expensive. Even cheap "rad-hard" parts can easily cost 20x their commercial relatives. And 1000x is not at all uncommon!

There have been immense efforts to find ways to make COTS (commercial off-the-shelf) parts and equipment better tolerate the rigors of use in space. This has been going on ever since the start of the world's space programs, especially after the Van Allen radiation belts were discovered in 1958.

I was fortunate to have participated in one small project in 1999-2000 to design dirt-cheap avionics for a small set of tiny (1 kg) disposable short-lived satellites. I'll describe a few highlights of that process.

First, it is important to understand how radiation affects electronics. There are two basic kinds of damage radiation can cause: Bit-flips and punch-throughs (I'm intentionally not using the technical terms for such faults). First bit-flips: If you have ECC (error-correcting) memory, which many conventional servers do, bit-flips there can be automatically detected and fixed. However, if a bit-flip occurs in bulk logic or CPU registers, a software fault can result. The "fix" here is to have at least 3 processors running identical code, then performing multi-way comparison "voting" of the results. If a bit-flip is found or suspected, a system reset will generally clear it.
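
A minimal sketch of that triple-redundancy vote, with the caveat that real flight hardware does this in lockstep logic rather than software; it only shows the decision rule.

```python
from collections import Counter

def vote(results):
    """Majority-vote redundant computations.
    Returns (winning value, needs_reset): any disagreement flags a reset so a
    suspected bit-flip is cleared before it can propagate."""
    winner, count = Counter(results).most_common(1)[0]
    return winner, count != len(results)

print(vote([42, 42, 42]))   # (42, False) - all three agree
print(vote([42, 13, 42]))   # (42, True)  - one unit flipped a bit; reset and resync
```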

Then there are the punch-throughs, where radiation creates an ionized path through multiple silicon layers that becomes a short-circuit between power and ground, quickly leading to overheating and the Release of the Sacred Black Smoke. The fix here is to add current monitoring to each major chip (especially the MCU) and also to collections of smaller chips. This circuitry is analog, which is inherently less sensitive to radiation than digital circuits. When an abnormal current spike is detected, the power supply will be temporarily turned off long enough to let the ionized area recombine (20ms-100ms), after which power may then be safely restored and the system restarted.
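
And a companion sketch of the punch-through (latch-up) protection, with assumed thresholds and timing; in practice this is an analog current monitor driving a supply switch, not code.

```python
import time

OVERCURRENT_A = 0.5     # assumed trip threshold for this chip group
RECOMBINE_S = 0.05      # 20-100 ms with power removed lets the ionized path recombine

def check_and_clear_latchup(read_current_a, set_power) -> bool:
    """One monitoring pass: power-cycle the rail if its current spikes.
    Returns True if a suspected punch-through was cleared."""
    if read_current_a() <= OVERCURRENT_A:
        return False
    set_power(False)          # drop the rail before the chip cooks itself
    time.sleep(RECOMBINE_S)   # wait for the ionized channel to recombine
    set_power(True)           # safe to restore power; the system then restarts
    return True
```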

Second, we must know the specific kinds of radiation we need to worry about. In LEO (Low Earth Orbit), where our satellites would be, our biggest concern was Cosmic Rays, particles racing at near light-speed with immense energies, easily capable of creating punch-throughs. (The Van Allen belts shield LEO from most other radiation.) We also need to worry about less energetic radiation, but at doses below those of a dental X-ray.

With that information in hand, next came system design and part selection. Since part selection influences the design (and vice-versa), these phases occur in parallel. However, CPU selection came first, since so many other parts depend on the specific CPU being used.

Here is where a little bit of sleuthing saved us tons of time and money. We first built a list of all the rad-hard processors in production, then looked at their certification documents to learn on which semiconductor production lines they were being produced. We then looked to see what other components were also produced on those lines, and checked if any of them were commercial microprocessors.

We lucked out, and found one processor that not only had great odds of being what we called "rad-hard-ish" (soon shortened to "radish"), but also met all our other mission needs! We did a quick system circuit design, and found sources for most of our other chips that were also "radish". We had further luck when it turned out half of them were available from our processor vendor!

Then we got stupid-lucky when the vendor's eval board for that processor also included many of those parts. Amazingly good fortune. Never before or since have I worked on a project having so much luck.

Still, having parts we hoped were "radish" didn't mean they actually were. We had to do some real-world radiation testing. Cosmic Rays were the only show-stopper: Unfortunately, science has yet to find a way to create Cosmic Rays on Earth! Fortunately, heavy ions accelerated to extreme speeds can serve as stand-ins for Cosmic Rays. But they can't easily pass through the plastic chip package, so we had to remove the top (a process called "de-lidding") to expose the IC beneath.

Then we had to find a source of fast, heavy ions. Which the Brookhaven National Laboratory on Long Island happens to possess in their Tandem Van de Graaff facility (https://en.wikipedia.org/wiki/Tandem_Van_de_Graaff). We were AGAIN fantastically lucky to arrange to get some "piggy-back" time on another company's experiments so we could put our de-lidded eval boards into the vacuum chamber for exposure to the beam. Unfortunately, this time was between 2 and 4 AM.

Whatever - we don't look gift horses in the mouth. Especially when we're having so much luck.

I wrote test software that exercised all parts of the board and exchanged its results with an identical eval board that was running outside the beam. Whenever the results differed (meaning a bit-flip error was detected), both processors would be reset (to keep them in sync). We also monitored the power used by each eval board, and briefly interrupted power when the current consumption differed by a specific margin.

The tests showed the processor wasn't very rad-hard. In fact, it was kind of marginal, despite being far better than any other COTS processor we were aware of. Statistically, in the worst case we could expect to see a Cosmic Ray hit no more than once per second for the duration of the mission. Our software application needed to complete its main loop AT LEAST once every second, and in the worst case took 600 ms to run. But a power trip took 100 ms, and a reboot took 500 ms. We were 200 ms short! Missing even a single processing loop iteration could cause the satellite to lose critical information, enough to jeopardize the mission.

All was not lost! I was able to use a bunch of embedded programming tricks to get the cold boot time down to less than 100 ms. The first and most important "trick" was to eliminate the operating system and program to the "bare metal": I wrote a nano-RTOS that provided only the few OS services the application needed.
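
For the record, the timing budget works out like this (all figures taken from the text above):

```python
budget_ms = 1_000            # must finish a loop between one-per-second worst-case hits
loop_ms, trip_ms = 600, 100  # worst-case main loop, power-trip recovery

print(loop_ms + trip_ms + 500 - budget_ms)   # original 500 ms reboot: 200 ms over budget
print(budget_ms - (loop_ms + trip_ms + 100)) # bare-metal <100 ms reboot: 200 ms of margin
```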

When the PCBs were made, the name "Radish" was prominently displayed in the top copper layer. We chose to keep the source of the name confidential, and invented an alternate history for it.

Then we found we had badly overshot our weight/volume budget (which ALWAYS happens during satellite design), and wouldn't have room for three instances of the processor board. A small hardware and software change allowed us to go with just a single processor board with only a very small risk increase for the mission. Yes, we got lucky yet again.

I forgot to mention the project had even more outrageous luck right from the start: We had a free ride to space! We were to be ejected from a Russian resupply vehicle after it left the space station.

Unfortunately, the space station involved was Mir (not ISS), and Mir was deactivated early when Russia was encouraged to shift all its focus to the ISS. The US frowned on "free" rides to the ISS, and was certainly not going to allow any uncertified micro-satellites developed by a tiny team on an infinitesimal budget (compared to "real" satellites) anywhere near the ISS.

We lost our ride just before we started building the prototype satellite, so not much hardware existed when our ride (and thus the project) was canceled. I still have a bunch of those eval boards in my closet: I can't seem to let them go!

It's been 18 years, and I wonder if it would be worth reviving the project and asking Elon Musk or Jeff Bezos for a ride...

Anyhow, I'm not at all surprised a massively parallel COTS computer would endure in LEO.

Mobileye's autonomous cars are heading to California. But they're not going to kill anyone. At least not on purpose

BobC

Avionics is not a good comparison.

A prior comment stated that autonomous vehicle software should meet the same standards as avionics. This opinion is wrong on at least two separate levels.

First, in the US, the FAA has two certification paths for avionics hardware and software. 1. Prove it is accurate and reliable (typically via formal methods), then test enough to validate that proof. 2. Test the hell out of it, at a level 5x to 10x that done for the more formal path.

Small companies are pretty much forced to use the second path more than the first. Where I worked, we relied on the second path and aspired to the first. We had awesome lab and flight test regimens that the FAA frequently referred to as "best practices" for the second path.

Second, the risk of death due to an avionics failure (per incident) is massively higher than it is in cars, especially given the ever-increasing level of passenger safety measures present in modern vehicles. The fact that aviation death counts are so low is due more to the relatively tiny number of vehicles involved compared to cars (on the order of ~100K cars to each plane).

Autopilots are fundamentally simpler than autonomous driving: Fully functional autopilots have existed for well over half a century (the L-1011 was the first regular commercial aircraft to do an entire flight autonomously, including takeoff and landing). The primary reason for this achievement is the large distances between planes. Mid-air collisions are vanishingly rare outside of air shows.

The massively greater complexity of the driving environment (separate from the vehicle itself) forces the use of statistical methods (machine learning), rather than relying solely on formal, provable rules. If it isn't clear already, this means that autonomous driving systems will be forced to use the second path to certification: Exhaustive testing.

Most of that testing must occur in the real world, not in a simulator, because we simply don't yet know how to construct a good enough simulator. The simulator will always miss things that exist in the real world. One goal of ALL real-world self-driving tests MUST be to gather data for use by future simulators! Just because simulators are hard is no excuse to avoid building them. We just can't rely on them alone.

That said, all such on-road tests must be done with a highly trained technician behind the wheel. It is VERY tough to remain vigilant while monitoring an autonomous system. Been there, done that, got the T-shirt, hated every minute. In my case it was operating a military nuclear reactor. Boring as hell. Terribly unforgiving of mistakes. Yet it is done every minute of the day with extreme safety.

I'd focus on the test drivers more than the vehicles or their technology. Get that right, and we'll earn the trust needed to improve the technology under real world conditions.

Donald Trump jumps on anti-tech bandwagon, gets everything wrong

BobC

SOSDD

Same Old Shit, Different Day.

I'm a solid Centrist. I favor minimal taxes, but as high as needed to fully fund government commitments. I also favor parts of the Liberal Agenda, but only if the gains are solid AND we can afford to pay for them.

I consciously try my best to avoid the "Echo Chambers" of the Extreme Right and the Extreme Left. Both make far more errors than valid points.

But the PoTUS Tweet stream is beneath dignity on all levels. A true travesty, no matter which side of the aisle you are on, especially so for me in the middle.

The first and finest service Twitter could do for the USA would be to delete @realDonaldTrump.

'WHAT THE F*CK IS GOING ON?' Linus Torvalds explodes at Intel spinning Spectre fix as a security feature

BobC

Why we need faster MEMORY!

Our deep, many-layered memory architectures, and the branch prediction and speculative execution on modern processors, exist simply because the CPU would otherwise sit idle over 50% of the time, just waiting for work to do while other work completes. CPU cycles have become staggeringly efficient primarily due to the deep and wide processor pipelines in current architectures.

The central problem is that transistors used for storage (cache and RAM) are far more "expensive" than transistors used for logic. A billion transistors can give you a whopping CPU, but not really all that much fast storage. This is why additional RAM architectures are needed, ones that use fewer transistors and take up less space while yielding CPU-level speeds.
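
A crude demonstration anyone can run (timings vary by machine, and Python/NumPy overhead blurs the effect, but the gap is reliably large): the same arithmetic over the same data takes several times longer when the access pattern defeats the caches, which is exactly the idleness the deep hierarchy and speculation are trying to hide.

```python
import time
import numpy as np

N = 20_000_000
data = np.ones(N)                                  # ~160 MB: far larger than any cache
orders = {"sequential": np.arange(N),              # cache- and prefetch-friendly
          "random":     np.random.permutation(N)}  # defeats caches and prefetchers

for name, order in orders.items():
    t0 = time.perf_counter()
    total = data[order].sum()                      # identical arithmetic either way
    print(f"{name:10s} {time.perf_counter() - t0:.2f} s  (sum={total:.0f})")
```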

If all of RAM could somehow be accessed at the speed of a register and at the cost of a spinning disk, then all CPUs would instantly become vastly simpler. That was the inspiration for RISC, when CISC processors failed to keep memory buses saturated at a time when transistors were still quite expensive; Cache made more sense than logic.

This is also a motivation for moving processors into the RAM itself, rather than hanging ever more memory onto ever more complex buses connected to many-core/many-thread CPUs. Why not put cores right in the middle of every DRAM chip? That one change would greatly reduce off-chip accesses, the major cause of speed loss. Let the DRAM be dual-ported, with one interface optimized for CPU access, and the other optimized for streaming to/from storage and other peripherals.

This is yet another problem illustrating just how much trouble we still have building things this complex.

It's time to "add simplicity".

Death to the North Bridge!

Another way to avoid eye contact: 4G on the Tube expected 'in 2019'

BobC

I'm Too Old: I Thought The Headline Meant 4G in My TV.

Sigh. The "boob tube" hasn't had tubes for ages.

Arm isn't saying IoT firmware sucks but it's writing a free secure BIOS for device makers

BobC

IoT Ain't There Without A VM.

I'm a real-time/embedded developer who was brought on-board to tackle late-breaking cybersec issues on a new military system that was due to be delivered ASAP.

We weren't allowed to trust our external routers and firewalls, so we had to configure local firewalls. No problem. We created fully stateful firewalls that understood our M2M protocols. Again, no big problem. We fuzzed our firewalls for months of machine time. No problems. We thought we had things locked down.
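
To make "stateful firewall that understands the M2M protocol" concrete, here is a toy sketch; the protocol, states, and message types are invented, and the real thing also validated lengths, fields, rates, and peers.

```python
ALLOWED = {                        # state -> message types we will accept
    "IDLE":    {"HELLO"},
    "SESSION": {"DATA", "KEEPALIVE", "BYE"},
}
NEXT_STATE = {("IDLE", "HELLO"): "SESSION",
              ("SESSION", "BYE"): "IDLE"}

class M2MFirewall:
    def __init__(self):
        self.state = "IDLE"

    def filter(self, msg_type: str) -> bool:
        """Forward a message only if it is legal in the current protocol state."""
        if msg_type not in ALLOWED[self.state]:
            return False                                # drop it; state unchanged
        self.state = NEXT_STATE.get((self.state, msg_type), self.state)
        return True

fw = M2MFirewall()
print([fw.filter(m) for m in ["DATA", "HELLO", "DATA", "BYE"]])  # [False, True, True, True]
```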

Then I learned the local firewall couldn't share ANYTHING with the local application. Which, as it turns out, was why the external firewall boxes had been pushed for in the first place. Houston, we have a problem!

We had a low-power multi-core x86 CPU, so we added a thin hypervisor, partitioned the memory and hardware, put the firewall and app on separate cores, and begged permission to ship what we had. Permission was granted, but only after many tedious hours of phone conferences with the Powers that Be.

Turned out to be an elegant and powerful solution. One I think should be generalized to all IoT, in that the application must not be permitted any direct network access and must run in a VM. A second VM should contain the firewall and the network hardware. The hypervisor should be as dumb as possible.

So, when will ARM ship M-family processors with VM hardware & instruction support? Is TrustZone fully equivalent?

Secure microkernel in a KVM switch offers spy-grade app virtualization

BobC

After blitzing through the paper linked above, this looks mostly like a "virtual monitor" system combined with "smart" keyboard + mouse + clipboard sharing.

At its simplest, virtual monitors are commonly used in CCTV systems to map normally independent systems (each with its own monitor) onto a single monitor. This system goes a bit beyond that to 1) permit multiple desktops to overlap, and 2) extract individual windows from the desktops for display on the shared monitor (mainly to declutter the display by removing redundant desktop pixels and adding an identification overlay).

Nothing that users of X-windows systems haven't been used to for well over 30 years. And, like X-windows, the trick is sending the window meta-data along with the content (be that pixels or graphics primitives). This information is normally sent out-of-band, such as via a separate stream, but this new system has only KVM-like connections, and so instead must use embedded pixel data to encode and convey the metadata. (BTW: This data could be vulnerable to a MITM attack or Tempest-like snooping.)

Sharing a single keyboard, mouse and clipboard across multiple PCs has been done for many years in many ways, with Synergy currently being the best-known example. The Synergy protocol is straightforward, as is the data routing. On each PC, a thin shim is used to route the single physical KM to the appropriate PC's KM input layer.

So we are left with just a pair of protocols to process, plus some contextual rules governing the properties and restrictions of the overall functionality. Not really all that much: the ability to handle KM inputs and pixel-level video switching is needed, but that's mainly simple hardware with simple drivers.

Taken as a whole, an OS isn't really needed at all. Not even a kernel: This could easily be run on bare hardware. But the need for the protocols to handle security restrictions demands some of the code, specifically the clipboard code (and, perhaps a security classification overlay), be executed in a trusted and protected manner. Not much of a kernel is required (the minimum needed to provide protection and separation of one shared agent and one agent for each connected system), so a small and proven minimalist kernel would seem to be just the ticket.

Place your bets: How long will 1TFLOPS HPE box last in space without proper rad hardening

BobC

It's really Cosmic, Ray.

On Earth we routinely simulate much of the space environment with one massively significant exception: Cosmic Rays, relativistic particles with extreme theory-breaking energies and unknown origin. We have some reasonable approximations that are a PITA to use at all, and impossible to use on whole systems, as they require de-lidding chips and exposing the naked silicon to heavy ion beams.

Cosmic Rays don't care about the Van Allen belts or Earth's magnetic field. But, thankfully, they are filtered quite nicely by Earth's atmosphere, converting into cascades of other relativistic particles that include muons and pions. These particles have vanishingly short lifetimes when observed in the lab, yet when they come from a Cosmic Ray cascade they manage to live long enough to reach the Earth's surface, all due to their startlingly high relativistic speeds.
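
The muon numbers make the point nicely: with a proper lifetime of about 2.2 microseconds, a muon "should" cover only some 660 metres before decaying, yet time dilation stretches that well past its typical ~15 km production altitude. A quick back-of-envelope, using textbook values rather than anything measured:

```python
# Why cosmic-ray muons reach the ground at all: time dilation.
C = 3.0e8            # speed of light, m/s
TAU = 2.2e-6         # muon proper lifetime, s
ALTITUDE = 15_000.0  # typical production altitude, m
GAMMA = 30           # Lorentz factor for a ~3 GeV muon (rest mass ~105.7 MeV)

naive_range = C * TAU            # ~660 m if there were no time dilation
dilated_range = GAMMA * C * TAU  # ~20 km once time dilation is included

print(f"range without relativity: {naive_range/1000:.2f} km")
print(f"range with gamma={GAMMA}:   {dilated_range/1000:.1f} km "
      f"(vs {ALTITUDE/1000:.0f} km production altitude)")
```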

Cosmic Rays are The Hulk of radiation, and since we have no clue how to make them on Earth, if you want to expose your equipment to Cosmic Rays, you need to send that equipment above the Earth's atmosphere.

And not far above it either! LEO does just fine.

Factories counter-punch Qualcomm in the gut as Apple eggs them on

BobC

Licensing as a Business

I'm in San Diego, home to QC, and have been closely watching the company for three decades.

For many reasons, QC consciously and explicitly chose to aggressively balance its fabless physical chip business against its patent portfolio. The mobile market was growing faster than QC could ever handle, and becoming a one-stop-shop for phone IP and premium chipsets would sustain QC's truly enormous R&D expenditures.

QC has created much of what makes today's mobile bleeding-edge possible. Not just the modem technology, but also all the other hardware and software needed to make it all work together as a seamless whole in the customer's hand. QC doesn't just sell parts and license patents, it provides entire solutions that help phone makers get the latest tech to market sooner.

It is important to note that QC pushes prices this hard mainly for its bleeding-edge tech. Their prior-generation and lower-end solutions are competitive enough on a cost basis to keep other players fighting on the margins. The main complaints against QC center on this practice: Charging through the ceiling for the latest tech in order to subsidize the other stuff.

Have you noticed the recent batch of mid-tier phones with QC chips? My own Moto G5 Plus is a splendid example of this, with its Snapdragon 625 providing 80% of the overall performance of a bleeding-edge phone for 1/3 the price. Why pay more?

I'll bet this is what REALLY pisses Apple off. Killing QC, and its entire approach to business, is the only way for Apple to maintain its insane margins while also moving into the mid-market.

This is a very intentional business practice on the part of QC, one that would instantly evaporate if their R&D ever fell behind. Every year they bet their entire future on their own continued ability to out-innovate the rest of the planet. And when they gain that edge, they cling to it using every available means.

From one perspective, QC's chip business is simply convenient packaging for their IP, more than it is a desire to sell silicon.

If Apple, or anyone else, was truly upset with QC's aggressive practices, they could, like any good Capitalist, choose to shop elsewhere. MediaTek and others make phone chipsets that are only slightly behind the bleeding-edge set by QC, and have less costly terms. But they are just parts, not complete solutions.

Did you know QC has more Android engineers than Google? QC does major R&D at all levels of phone tech. QC doesn't have more iOS engineers than Apple, but it has plenty of them too.

ALL first-tier phone makers repeatedly choose QC for their flagship phones. Because it's simply the best way to get the latest & greatest phone tech to market soonest.

Apple simply wants to pay less for it. And rather than compete with QC on an R&D basis, they instead attack it on a financial/legal basis.

Apple has made so much money shipping phones with QC chips that it seems insane for it to choose to do otherwise. So it clearly wants to "break" QC. And it needs to do so NOW, before QC IP gets established in 5G, and before the mid-tier hurts them any more.

This is not "really" about FRAND or similar issues: It's about killing QC before 5G. This is a fairly narrow window.

If you review history, this is not a new tactic. It seems QC attracts lawsuits whenever it prepares to roll out new IP. Right after the phone makers have a ton of cash from profits built on selling phones with QC chips.

Strange pattern, that.

QC has repeatedly proven itself to be the best at rapidly developing new phone tech and bringing it to market. Why isn't Apple, with its QUARTER TRILLION dollars in cash, funding R&D competition itself?

Why doesn't Apple simply buy QC and dictate the terms it wants?

Or, more to the point, why doesn't Apple simply pass on the latest QC tech and stick with older and less expensive solutions?

No, Apple has decided that the courts are the cheapest way for it to improve its bottom line.

It's that simple.

QC is not innocent in any of this: They are aggressive because they have done the R&D needed to stay in front of all competition, to set the bleeding edge. Their practices are illegal in many countries. But those countries don't create phone IP, and they don't make phone chips, so to QC they simply don't matter.

Look at what happened in India when QC chips were (briefly) banned: The entire industry revolted, and the ban quickly was replaced with a fine that was never paid. QC knows how to leverage its tech, its customers, and its entire market.

QC has encountered rough times before, and they've ALWAYS fought their way back by sustaining and leveraging their R&D prowess.

If I were QC, I'd make their next-gen chips available to everyone BUT Apple, and see what happens to Apple's suit.

Sure, QC will play hard in the courts. But they will also double-down on their R&D golden goose to get that next Golden Egg out there and into the hands of all the OTHER top-tier makers, probably right about when the iPhone 8 starts to ship. To make the iPhone 8 obsolete before it ships in volume.

Apple's tactics are strictly short-term: QC has the better long-term vision, if they can survive the short-term squeeze.

Linux 4.11 delayed for a week by NVMe glitches and 'oops fixes'

BobC

Re: It's a trap!

The whole reason I haven't done any Linux kernel contributions is the extreme abrasiveness of the process. Linus isn't at all bad, but his verbal style can certainly match what the process itself feels like.

It's MUCH easier (and less traumatic) to make a bug report (with code, mainly for drivers) than to attempt a pull request.

Sure, I don't get my name on the commit. But I'd want that only if it were important for my CV. Which may happen someday, but not today. (For now, I can trace my bug report to the commit, which is good enough.)

Revealed: Blueprints to Google's AI FPU aka the Tensor Processing Unit

BobC

NVIDIA FTW?

Though I have great hopes for ASICs like the TPU, and for the many FPGA-based ANN accelerators, as well as for upcoming 1K+ core designs like Adapteva's, the bottom line is support for the common ANN packages and OpenCL.

In that regard, the GPU will reign supreme until one of the other hardware solutions achieves broad support. Only AMD and NVIDIA GPUs provide "serious" OpenCL support, and between the two, NVIDIA is preferred in the ANN world.
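
If you want to see what "serious" OpenCL support looks like on your own box, enumerating platforms and devices is trivial. A quick sketch, assuming the pyopencl package and a working OpenCL driver are installed:

```python
import pyopencl as cl  # assumes pyopencl and an OpenCL ICD are installed

# List every OpenCL platform/device visible to the loader, with the bits
# that matter for ANN work: device type, memory, and compute units.
for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  {device.name}")
        print(f"    type:          {cl.device_type.to_string(device.type)}")
        print(f"    global memory: {device.global_mem_size // (1024**2)} MiB")
        print(f"    compute units: {device.max_compute_units}")
```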

An important previously-mentioned caveat is that most of the ASIC and FPGA accelerators aren't much help during training. For now, you're still going to need CPU/GPU-based ANN support.

Speaking in Tech: Elon Musk and the AI apocalypse

BobC

Vegemite - FTW!

When I first visited Australia in '87 I was introduced to Vegemite by Aussies looking for a laugh. Much to my and everyone's surprise, it was love at first taste, triggering a deep craving I never knew I had. On my own I can go through a 220g jar in a week. It goes in everything!

Spotted: Bizarre SpaceX rocket-snatching machine that looks like it belongs on Robot Wars

BobC

Re: Let's look at what's there...

Geez, I saw the treads, but assumed they weren't nimble enough to let the robot position itself precisely. Clearly, they're what's needed to carry the load.

The marks on the barge deck indicate wheels are in use somewhere, but perhaps not on this robot, though there isn't anything else on the deck (during the photo, at least) that could make those marks.

Perhaps small (hidden) wheels to move and position the robot, with treads to move when loaded?

BobC

Let's look at what's there...

1. There are 4 pistons, which can only engage with the 4 landing legs on the Falcon 9 core.

2. There are exposed cables on the robot, so it's not for "hot" use: The core must at least be vented (perhaps a minute after touchdown).

3. The robot must be mobile. That may seem obvious from the umbilical, but its wheels (or treads) aren't visible. It likely moves very slowly.

Add it all up, and it seems the robot's purpose is to move freshly-landed cores.

But move them where? Why do this?

For the trip to port, it would seem best to have the core at the precise center of the barge to minimize combined pitch and roll motion. So the robot could be used to center-up a core after an off-center landing. But there have been no cores toppling over on the way home after an off-center landing, so while this use seems possible, it can't be the primary use of the robot.

As others have said, it makes sense to use a robot if another core is on its way to the barge. A robot is far cheaper than building (and managing, maintaining, operating) additional barges! Plus, landed cores are quite light: shifting them to the end of the barge won't significantly affect its trim, and I suspect the barge has floodable compartments to manage trim with high accuracy.

The key complication is if the second core landing fails: Two cores could be lost instead of one. So, to me, the robot indicates SpaceX's very high confidence in nailing every single landing, no matter how crowded the barge may be with previously landed cores.

Even if the cores aren't from the same Falcon Heavy mission! What if both SpaceX pads have flights on the same or sequential days? It would make huge sense to keep the barge out there either until it is full, or there is a break in the launch schedule.

Remember, SpaceX production plans allow for at least two launches per week. And that number EXCLUDES reflights, which could increase the launch rate by at least 50%. If we assume most/all are at Canaveral, then three launches per week with most cores being recovered is way more than a single barge can handle, unless that single barge can handle multiple landings before returning to port.

Now, let's look again at the case of handling a pair of Falcon Heavy cores. I think this scenario is less likely due to the time needed to permit a core to cool and vent prior to being moved. Nobody wants to be shuttling armed bombs across the deck! Even a minor mishap could take the barge out of commission for the next Falcon Heavy core, which is likely less than a minute behind the first.

The robot's most likely use seems to be to support multiple recoveries for multiple missions over a period of days, perhaps up to a week.

Zuckerberg thinks he's cyber-Jesus – and publishes a 6,000-word world-saving manifesto

BobC
Pint

Thanks!

Never before has an online publication taken such a bullet for me, rescuing me from the meanderings of a tech-gazillionaire who clearly took no liberal arts in college.

Love you all. Pints all around if I ever make it to Blighty.

LightSquared blasts GPS naysayers in FCC letter

BobC

There's always GLONASS and Galileo...

'Nuf said.

eBooks: What to read on which reader

BobC

You only scratched the surface!

My motivation for getting an eBook reader was to permit me to spend less time at my computer reading. I do a ton of reading...

I made my choice based primarily on wanting a "minimalist" device (no WiFi or cell networking) that would be rugged, long-lasting, easily fit in a cargo pants pocket, and would accept an SD card for additional storage. I especially like the current crop of 5" eInk-based readers, since they have the same 800x600 resolution as many larger readers, providing sharper text in a smaller form factor.
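
The "sharper text" bit is just arithmetic: the same 800x600 pixel count over a smaller diagonal means more pixels per inch. A quick check, assuming 5" and 6" diagonals:

```python
import math

# Pixels-per-inch for the same 800x600 panel at two common diagonal sizes.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

for diagonal in (5.0, 6.0):
    print(f'{diagonal:.0f}" 800x600 panel: {ppi(800, 600, diagonal):.0f} PPI')
```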

I'm using the Astak EZReader PRO, which understands just about every known non-DRM eBook format, in addition to Adobe Digital Editions. My 80-year-old mother purchased the same model!

The Astak EZReader PRO is a rebranded Jinke Hanlin V5, which is also sold by several others, such as Endless Horizon's BeBook Mini, and the IBook V5. Other eBook readers include: Bookeen's Cybook and ECTACO's jetBook. And the list goes on...

There are also many FREE sources of eBooks, and Project Gutenberg is only the start: Google for "free eBooks". Beyond that list, many authors and publishers provide free eBooks for their titles that are no longer in print. Crazy publishers, such as Baen (one of the largest publishers of science fiction), make nearly their entire catalog freely available as eBooks.

More authors are placing their books under a Creative Commons license and making them available in eBook form from their web sites. Find them by searching for "Creative Commons", or simply start here: http://wiki.creativecommons.org/Books. Short works such as scientific papers and whitepapers are also readily available from the authors and/or journal publishers (including arXiv, PLoS and others).

Finally, you can always make your own eBooks! For example, the Calibre program, among its many other features, has the ability to periodically download any site's RSS feed, convert it to your preferred eBook format, then automatically transfer the latest news to your eBook reader whenever it is plugged in.

Yes, you can even get The Register on your eBook reader!
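
For the curious, a Calibre news "recipe" for that last trick is only a few lines of Python, added via Calibre's custom news source dialog. A minimal sketch; the feed URL here is an assumption from memory and may need updating:

```python
# Minimal Calibre news recipe: fetch The Register's feed and build an eBook
# from the most recent articles. Feed URL is an assumption and may change.
from calibre.web.feeds.news import BasicNewsRecipe

class TheRegisterRecipe(BasicNewsRecipe):
    title = 'The Register'
    oldest_article = 2           # days of articles to keep
    max_articles_per_feed = 50
    no_stylesheets = True

    feeds = [
        ('Headlines', 'https://www.theregister.com/headlines.atom'),  # assumed URL
    ]
```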