230 posts • joined 27 Jan 2009
Fly me away ...
It would be interesting if Ecuador were to engage a helicopter and a ship (do they have a navy?) - fly Assange off the roof of the building to a waiting ship offshore. Then the Brits would be faced with the prospect of a military confrontation to prevent the ship from leaving territorial waters (actually if they're 12 miles out, even that prospect goes away once he lands on the ship).
This could certainly be done. Even if the building is not set up for helo landings, they could use the rescue basket method. Of course there are certain issues with the UK air force as well. But would UK actually shoot down an Ecuadorian helo, just to catch this meatball?
As a bonus, the movie rights to the rescue would probably fund Wikileaks (or Assange's new Ecuador hideaway mansion) for decades.
“Works as designed” - of course, that is the usual case.
From my days of teaching Software Quality Assurance, over 70% of bugs in shipping production code were built into the design at the beginning. IIRC the intent of methods like Extreme Programming was to help catch many of these design flaws by including representatives of the “customer” in the design team and using iterative design.
There is no reason to expect that the hugely complex chip design process is very different, even though chip designers must of necessity be much more rigorous in their design process and re-use existing modules extensively. The latter allows each module to be debugged independently over time. But the interactions between modules, which are a critical performance factor in modern chip designs, must be extremely difficult to understand, much less account for in the higher level design process. And the chip design process today looks a lot more like software than hardware. Designers must depend on their CAD system (another beast of high complexity with its own bugs!) to correctly manage the low level interactions.
For regular software, the statistics show that if you are using reasonable design methodologies, shipped production software has roughly one bug in every 200 lines of code, regardless of the language. (The difference between low level and high level languages was strictly in the impact of a given bug, not the probability.) But about 10-15 years ago MS mentioned they ran about one in 70, I suspect due to their practice of hiring young hotshot SW people who had not learned defensive programming. And again, most of the remaining bugs in shipping code came from the design.
Perhaps most scary: less than 50% of the remaining bugs were likely to be discovered in black box testing.
Free holidays in Seattle???
First prize is a week in Seattle. Second prize is two weeks in Seattle ... in February!
Sorry Seattleites - I’m a refugee from the cold, dark, damp, rainy, cloudy, grey, depressing Northwest winters. There’s a reason why coffee is so popular there. The weather reports should include a “damp chill index” just like the wind chill. 40 degrees in drizzle & mist feels like -10 degrees in sunny dry.
Seattle is famous for that special type of rain called What Rain, a drizzly mist, as in when a miserable visitor asks, “How can you stand out here in this rain?” To which the Seattleite replies, “What rain???”
Seattle has more definitions for types of rain than any other place:
- sunny (rare to unheard of except for two days in August)
- sog (where you think it's sunny but it's not)
- drain, or “light rain”
- hard rain (rare)
- downpour (very rare)
Re: 10 years to migrate 16000 PCs and they're going to go back to Windows ?
Funny you should mention SAP - a global company I used to work for budgeted $200 million and one year to move their US operations to SAP, and planned to move the rest of their operations in Phase 2. Phase 1 took three years and $700 million. They cancelled Phase 2. SAP stock dropped - IIRC 20% - immediately.
The point being, if they want to run SAP they _really_ don't understand their costs. There are even open source competitors for SAP that are more amenable to adjustment to their way of doing business. The big cost of the SAP catastrophe at my company was due to the requirement to completely revise every aspect of their existing business model to fit the SAP way of doing things. But pointy-haired bosses, especially in government, are usually clueless about such things, and the communications difficulties between IT professionals and MBAs or Public Administration degree holders magnify the problem.
Alternative: spend $10 million to PAY GERMAN SW ENGINEERS TO FIX ISSUES
Imagine if the city fathers actually contributed money and/or software engineer hours to fixing every issue they have with the open source products they are using! That could make a huge difference in those projects, provide employment for local people instead of sending money to the US, and give them exactly the software they need, at 1/10 the cost!
Domain privacy is available from registrars I'm familiar with - why is this still a problem?
Except for certain TLDs such as .us, every registrar I know of offers privacy at extra cost. This works by providing a special contact code that can be used by law enforcement with proper court papers to identify the real person. So worst case, it seems to me that this system could be made free, with or without default. Why is this insufficient?
Full circle dept. - drives were once used to make music
I probably still have the IBM 1130 assembler code that sent signals to the big old Winchester “washing machine” disk drives at different frequencies. The program took input in the form “AABBC+” etc. to form musical output. Output was generated by setting a transistor radio on the console above where the channel wires were routed. The signals in the wires were powerful enough to generate sounds on an A.M. radio nearby.
Needless to say this was probably not good for the Winchester drive!
Pan Am - They'llll beeeee baaack
In today's world of old logos and brands being bought and revivified, it could happen. The original Pan Am was a much more entrepreneurial company than I ever realized. Pan Am was very much a startup when it talked Boeing into building the original 314 Clippers, promising to buy them if built. (They actually only bought six of the original and six more of the 314A. https://en.wikipedia.org/wiki/Boeing_314_Clipper)
I don't know who owns the trademark today, but I could see Jeff Bezos buying the brand for a hypothetical service using Blue Origin launch vehicles for its competitor to Musk's SpaceX suborbital flight service, or even an orbital shuttle service to space stations and such.
Fun, but inedible ...
We tried growing pumpkins from some Giant Pumpkin seeds. We didn't work hard at it - no special treatment other than removing most of the excess fruits early on. We got a good sized pumpkin, 50 lbs. or so. But IMHO it was as close to inedible as pumpkin can get. So I wouldn't recommend using it for pie without an excessive amount of spices and sugar!
The classic problem for pre-internet advertisers ...
The classic problem for pre-internet advertisers was, "I know I'm wasting my money on 1/2 of the advertising I buy. I just don't know which half." The Internet fixed that problem to a great extent, and made much of the Internet more akin to the entrance of a department store, where just looking at the men's ties quickly brings a tie salesperson over to "help". Unfortunately, while even that was too far for most of us, the vendors wanted, and took, even more of our privacy.
When I leave a store (whether I bought something or not) I don't want the salesperson to follow me down the street and continue harassing me. And I don't want them to sell the information that I touched the running shoes on the way out of the store to the shoe store down the mall.
Re: I've just noticed something. No AI project I've seen ever used the project to maintain itself
From my understanding and nonzero experience, every machine learning solution is domain/application specific so far. That is the very limited state of the art. Yes, you could likely build a system that could gradually improve itself. But that is all it would be good for.
Long ago I argued that a good compiler should gradually learn to be a better compiler. AFAIK that has not happened yet. But all of these possibilities do lie before us.
I've told several people that the next generation of computer "programmers" will be more like teachers, helping baby AI to learn how to solve the problem(s). This is radically different from classic imperative or functional programming but still requires the special ability to understand machine processing from the ground up.
Also all previous data
This key would also allow decryption of all emails, archived data, etc. that was sent out any time in the past.
We need a micropayment system
The original hypertext 'xanadu' system proposed by Ted Nelson included several features that would have made a lot of sense - transclusion and micropayments being two of the most useful.
I am not going to subscribe to 50 different publishers and pay each of them an annual or monthly subscription rate. This would cost $1000s per year. But I'm willing to pay the same amount they presently get through advertising via an anonymized service that worked with all or most publishers.
There are two long-standing models of this. YouTube used to be more or less like that, and maybe is going that way again with their premium service - but are they still tracking? And the ASCAP and BMI music services have worked with radio stations and others to automatically pay musicians and composers standard rates for songs played. This is not a complicated issue.
If I could subscribe to a general inclusive subscription service - perhaps $10/month up to maybe $50/month, bumping up a dollar at a time depending on how much reading I want to do - that simply paid publishers for articles that I read (_not_ just clicked on by accident), eliminated all the tracking by the publishers that joined the service, and just gave ad-free content, I would totally subscribe to that.
I'd like to know what the average revenue publishers receive on one page view, based on clickthroughs and whatever else. It can't be that much.
Re: As I've said before ...
"Some people have 20 years of experience. Others have one year of experience 20 times."
Interesting. Mercurial handles the rename nicely ("hg rename"), but it doesn't go in and edit all the places where it's referred to. But (using PHP) the autoinclude system handles that if you use a filename that matches the pattern for the class name. The autoinclude system works out the filename from the class name. So rename the class in the source file and wherever it's invoked, rename the file, you're done. I don't use git (or C, C++) so IDK other systems.
Re: So get rid of the barrel!
I think you are thinking of coilguns, which use magnetic fields to accelerate - like maglev trains and the Hyperloop. A railgun uses huge current going across the sabot that carries the projectile. Even just using the magnets to keep the pieces separated, I suspect that the current density is so high that it would quench any known superconductor.
The going-up-the-mountain launcher was a coilgun IIRC, not a railgun. Big difference. A coilgun is technically similar to a maglev train, using sequential magnetic fields to accelerate a vehicle. A railgun pushes huge doses of current through the projectile (actually a sabot that carries the projectile). Railguns can accelerate much faster. A coilgun/maglev in an evacuated tube (see also Hyperloop), going about 45 degrees upslope to above 20,000 feet and about 100 km long could replace most or all of the first stage of a launcher.
The biggest issues, beyond the sheer building of the machine, are the sudden insertion into atmosphere (albeit less than 50%) when the thin plastic barrier at the top is breached at Mach something, and the survival of the vehicle in hypervelocity travel through the remains of the atmosphere. But it is probably doable, and if/when space launches become more than a daily occurrence, the economics might start to look pretty good.
Another issue - such a thing can only launch into one orbital plane, and it takes significant energy to change inclination.
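To put rough numbers on the idea: assuming (my figures, not from any real proposal) a 100 km track and a 2,000 m/s exit velocity, the constant-acceleration kinematics work out to a surprisingly gentle ride:

```python
# Back-of-envelope kinematics for a coilgun-in-a-tube first stage.
# Assumed figures (illustrative only): 100 km track, 2,000 m/s exit
# velocity, constant acceleration the whole way.

track_length_m = 100_000.0
exit_velocity_ms = 2_000.0

# v^2 = 2 * a * d  =>  a = v^2 / (2 * d)
accel_ms2 = exit_velocity_ms ** 2 / (2 * track_length_m)  # 20 m/s^2
accel_g = accel_ms2 / 9.81                                # ~2 g
time_on_track_s = exit_velocity_ms / accel_ms2            # 100 s

print(f"{accel_ms2:.0f} m/s^2 (~{accel_g:.1f} g) for {time_on_track_s:.0f} s")
```

At roughly 2 g for 100 seconds this would even be survivable by passengers; as noted above, the hard part is everything after the muzzle.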
AI == methods we haven't figured out how to do yet
Back in my Systems Science / Machine Learning period, one useful definition was that AI was defined by techniques that had not been figured out yet. At one time "Expert Systems" were considered AI. Later what we now call Machine Learning was AI. And Neural Networks, Genetic Algorithms, Cellular Automata were all considered AI according to some. Once we've taken an AI concept and turned it into a methodology, it loses the mystique and becomes just another computer tool.
I can argue that AI is thus an infinitely regressing goal, defined by the very fact that there is some aspect that we can still identify as being "not quite" real intelligence. Maybe we will always be able to distinguish between an AI and an RI due to certain mannerisms, preferences, etc. - much like we distinguish between people from different locales or ethnicities by linguistic differences.
Re: More Branson Marketing
It's a real pity that more energy, publicity and money isn't being invested in truly revolutionary work like Reaction Engines
They seem to be progressing reasonably well, having just received another chunk o' cash after successful testing of the SABRE engine concept. Something I read a couple of weeks ago gave me pause - apparently the timetable for the Skylon spaceplane is being pushed back, because certain military types are taking an interest in using SABRE for military purposes. This tells me that the military are convinced the thing actually works.
Re: Not the first to notice ...
Considering what happens when things going at these speeds meet things going at lesser speeds, perhaps "Splat!" would also be a good name!
Big difference in development costs today
So that's what killed SSTs - that they might be economic if you give them away, but there is no business case for development and building. We'll see if Beardo's people can do things differently - I suspect they'll find that they are no more able to overcome the technical and certification issues than Airbus or Boeing.
This may be one area with a big difference in costs. The original SSTs were designed by hordes of engineers with slide rules, some primitive computer calculations, and paper drawings. (I read once that Boeing had actual full size "proof" drawings for the final design of the original 747 wings, with a rolling catwalk above that the engineers could ride on while they confirmed clearances and added last minute changes.)
Today a very few engineers with advanced CAD software can design, build, and even test the entire airframe in the computer including routing of cables and hoses, in a small fraction of the time. And the wing shaping, stress management and other mechanical elements can all be optimized on the computer and the fabrication tooling automatically designed to go with it. The skins can be shaped in almost arbitrary ways, that could not even be contemplated back then.
So I'm guessing that development costs will be one tenth of what the 1960s SST designs cost to get to manufacturing. And certification will be easier, as the computer data will be available for analysis as well.
Some latest tech is much more rad-hard
Most people aren't aware of some of the latest technology, notably NanoRAM. NanoRAM is an almost ideal memory/logic technology, except for one teeny tiny detail. It's actually been used in several military satellites, but it's still expensive and difficult to make.
As I understand it, a NanoRAM 'switch' is a bent bit of carbon nanotube, which is connected to one side of a circuit, plus a 'landing zone' which is connected to the other side. The nanotube can be flexed (magnetically? I forget) either to bend over and connect to the landing zone, or to straighten and disconnect. In either state it is completely static, and needs nothing to maintain the state. The only time it is sensitive to radiation is in the nanosecond during which the switching is in progress. Its switching time is much faster than silicon, its density is much higher, its switching power is much less, and the power required to maintain state is zero.
From what I've heard and read, the problem of making consistent, repeatable nanotubes has been the real issue, which has prevented this technology from becoming a common replacement for both dynamic and static RAM in computers. But its value in satellites may be unsurpassed.
I for one would like to know if making the nanotubes might be easier in microgravity. If so, then this might be a technology that both enables and depends on space development. Caveat: I only know what I've read in Wikipedia and online articles, and discussions with folks who know a little more than I do.
Picture reminds me of those big afro wigs
Obligatory Gary Spivey: http://www.thewigmall.com/wp-content/uploads/2010/04/GarySpivey.jpg
Re: Can only
From just what I read here in this article, the actual manufacturing company does not seem to be a defendant? If so, I wonder why. It seems to me that the company's policies should have prevented this if they were followed.
Re: "Oh, and F4 Phantom FTW for modern."
F-104 Starfighter - I recall that the only "airplane" with a worse glide ratio was the Space Shuttle. Was it 1:3? I recall that they used Starfighters with wheels down as escorts on early landings of the Shuttle. It was the only plane that could fly both fast enough and badly enough to stay with it.
Re: The A380 is not ugly!
Bricks can fly, it all depends on the size of the engine!
HME will be enabling tech for agents and "uploading humans" - transhumanism
HME will probably always require 100-1000 times as much CPU power, and possibly data space, as unencrypted computation. But it will be an essential tool for maintaining the internal privacy and security of "agent" systems traversing the internet. Without it, data at rest may be encrypted but it is still in plain text while in memory for processing. Since an agent has no way to predict or restrict what processors it is being run on - in fact not even whether it is on a real processor or a virtual one - those processors may be on compromised services that could be reading that memory and tracking the computation.
The only way that has been proposed to protect such agents from compromise is homomorphic encryption, which allows the entire data collection that represents the agent to be kept in encrypted form at all times, even while it is running its computation processes. (In fact I would expect a higher degree of encryption for the data at rest, and a less-secure simulacrum used for the processing phase. This may be a necessary compromise.)
IOW, if you have uploaded your mind and personality to the Net, that "evil" processor could be reading your mind and even erasing your memories and substituting new memories. But agents have many other practical purposes.
The two most important factors in preservation of the internal integrity - identity - of any system are privacy, and protection against undesired or unnoticed modification from external forces. Only HME has this capability.
As a side effect, this requirement will drive another wave to higher performance and capacity. An individual encrypted agent might require from one to 100 petabytes of storage and equivalent increases in computing and network traffic, within this century.
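As a toy illustration of the homomorphic principle - computing on ciphertexts without ever decrypting them - textbook RSA happens to be multiplicatively homomorphic. This sketch uses deliberately tiny, insecure parameters purely to show the property; real HME schemes (lattice-based FHE and friends) are vastly more complex:

```python
# Toy demo: textbook RSA is multiplicatively homomorphic.
# Tiny, insecure parameters for illustration only - never use in practice.

p, q = 61, 53        # toy primes
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent (d * e == 1 mod 3120)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# Multiply the *ciphertexts*: the result decrypts to a*b,
# even though nothing was decrypted along the way.
c_product = (ca * cb) % n
print(decrypt(c_product))  # 84
```

Fully homomorphic schemes extend this to both addition and multiplication on ciphertexts, at the heavy performance cost described above.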
Bad, bad, bad idea
I think I said something to this effect before on the original announcement: "Moz-people, have you lost your fricken minds!" This is a pure example of the triumph of technically ignorant marketing nerds over the boring reality of how things work. Yes, it does "speak to the essence of what Mozilla does", much in the same way that parking your bicycle in the middle of the freeway speaks to its "essential spirit of transportation", with (one hopes) perhaps not quite equally bad results.
Please, Mozilla, please. Undo this very bad idea. It has as much style and attractiveness as lipstick on a pig, and (as we see) completely confuses the parsing systems on applications and services spread out all over the Internet. If you like square wheels on a Porsche, you'll love this new trademark. And, since the Moz-people involved seem to be the mechanically-uninclined type for which this warning is relevant, please also stop using pliers on your wheel nuts and trying to unscrew Phillips screws with a straight screwdriver.
So sad, I barely knew ye
I'm very sorry to hear about Lester's sudden demise. We exchanged emails and some other things in late 2015. I had subscribed to the LOHAN Kickstarter, and I had suggested he write about the Integrated Space Plan (http://thespaceplan.com), which he did in time to (no doubt) help with our own Kickstarter for the ISP poster. I sent him a copy of the poster when it came out, and I received by coincidence the LOHAN mug and the ISP mug in the same postal mail!
We are beginning to put together plans for a new edition of the poster, and I would gladly have sent him a copy when it comes out. I had hoped someday to meet him, but alas that will never be. But I'm sure he's now flown higher than LOHAN ever could!
Re: Nobody will buy them
Perhaps the solution is to build for the market. Go out and find what features and price point would be competitive for sale to other countries, get a few letters of intent, or better actual pre-orders. Then add the UK order to the list. Incorporate into the design some flexibility and/or feature models ("white sidewalls, leather seats, sea-to-air missiles, ...").
Require the design to be buildable in pieces at multiple shipyards, accurately (I've seen videos of the Koreans building large container vessels where the pieces fit together with tolerances under a centimeter, it's doable.)
Use fixed price contracts and make sure the design is complete enough to minimize change orders. If possible contract 1/2 the order to each of two vendors, and require all modules to be interchangeable. This is common practice in the auto industry, admittedly at higher volumes. But how much is the overall hull shape going to change? Building to the market will go a long way to preventing Lockheed-ization and gilded designs.
Re: Well, if a ship sank there, couldn't a building?
> Same in New York City - witness the ship under the WTC. But I believe they don't have a problem with reaching bedrock.
NYC is an interesting and useful comparison. I recently learned that, looking at a map of Manhattan, all of the high rise buildings are in two fairly small areas of the island. Buildings on the rest of the island are limited to five or 10 stories. This is because only those two areas have solid bedrock. The city does not allow super high rises on those other areas, with or without piles.
These tall skinny buildings are a special problem. A cathedral is tall, but not that tall in proportion to its footprint - the height is maybe three or four times the width. So building on a lesser foundation may cause settling, cracking, etc. but is unlikely to result in the entire structure tipping over. But a 60 story building (maybe 700 feet tall) on a 60 or 70 foot wide lot has a lot of leverage, so a tiny bit of tilt will quickly start to escalate as the weight gets concentrated on one side.
The City _allowed_ them to sink piles only 80 feet????? They will be sued as well.
In a city that is known for its earthquake hazard, and that is built on landfill, failing to require the piles for this building to extend all the way to bedrock is an engineering failure on the part of the city's building department as well as the developer. Landfill, especially near a large body of water that can provide lubrication, has a tendency to liquify under earthquake stress. There are videos online of the dirt flowing up through sidewalk cracks during earthquakes.
Both the developers and the city are in deep doo doo. And should be.
Re: With Oracle?
"While I don't mind Larry, I certainly do not want to put money into his sailing expeditions."
Actually for me that's the only thing he does that I _would_ support! :)
Re: Pint due.
> But political nuts that insist on turning any thread into political shit make me wish you could legally shoot them on sight.
To paraphrase, "If it's politician season, why can't we shoot them?" :D
> I'm not smart enough to figure out a solution (and there may not be one), but it seems to me that something should be possible.
What I'd _like_ to suggest - though it's actually a bad idea - is that when one of these hijacked devices is identified, the victim server be allowed to route back to the offending device and reset it, erasing the bogus code and setting a new random password. Then the device would still run, but the owner would be locked out of the admin interface until they reset to factory specs again (and hopefully set the user/pass to something different). Needless to say, this is a bad idea.
But class action liability forcing a recall, and/or legislation, requiring every device to have a different factory reset password and to default to disallowing admin access from the WAN side, would solve most of these problems. And I suspect you will see ISPs and cable providers taking an active role and blocking devices that they determine are susceptible. They could do this with a quick login test when a device is first seen by their routers, detecting the device type and trying the default login. If it works, they block traffic from that device (or port, if on a local NAT setup).
Re: Helpful Article
> Hopefully, all DNS sites will start caching; I wish my computer would cache the IP address of sites I visit so that I wouldn't even notice a DNS failure - it could even warn me if an IP address changes, to help prevent IP spoofing.
I have a local DNS server running in cache mode on all my computers - desktops and servers. These are all Linux machines. IDK if Windows has that capability, but I think the default configuration for Ubuntu is to run bind as a caching DNS server if it is turned on. So my net config uses 127.0.0.1 as the DNS source, and my bind configuration uses 22.214.171.124 plus another one.
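For reference, a caching-only BIND 9 resolver needs very little configuration. This is a minimal sketch of a `named.conf.options` fragment; the forwarder addresses are placeholders (RFC 5737 documentation addresses), not the ones I actually use:

```
// Minimal caching-only resolver sketch for BIND 9 (named.conf.options).
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { localhost; };   // serve only this machine
    forwarders {
        192.0.2.1;                // placeholders - use your preferred upstreams
        192.0.2.2;
    };
    dnssec-validation auto;
};
```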
One additional benefit is that when I'm on a cable connection this bypasses the cable company's default DNS that it sets up in my cable modem's DHCP config, which they use for various nefarious purposes such as inserting their own ads in websites, selling my traffic info, and "fixing" domain name typos by routing to their own advertising sites. I've seen all of those tricks at various times when visiting people who use comcast or optimum.
USAF could be help or hindrance
I'm torn. On the one hand, there are some real experts in this sort of thing in the USAF, and they could be helpful. OTOH, there are also some overrated desk jockeys and bean counters who could be depended on to stall the investigation for years. And the USAF brass has shown great willingness to lobby, intervene or sandbag in favor of their buddies at ULA.
It is telling to see which politicians are on which side. Those asking for federal involvement are largely already in the ULA pocket. Certainly if the politicians were able to get an 'investigation' going, it will take a year or two before they even get started, and in the meantime SpaceX launches will be halted, decimating their market and forcing their customers to other competitors.
With the decades-long history of bribery, political manipulation (in the US and other governments around the world), and chicanery, it is not outside the realm of possibility that ULA or its parents Boeing and Lockheed might have had something to do with this call for an investigation. I strongly doubt actual sabotage, but I have fairly knowledgeable friends who suspected that from the beginning.
Reagan had little to do with it. I was there. At the time the USPTO was so many years behind that entire product lifecycles were going by before the original patent got reviewed. (Also, no software patents, as software was based on algorithms and algorithms were math, and math could not be invented, only discovered as a fact of life. Until 1986. But that's another story.) So everybody - Congress, the President, business, etc. - was whining out loud about the situation.
My company actually was working with another company to bid on the PTO's RFP for a system to scan and OCR all the existing patents and put them into a searchable online system. This would allow examiners and others to search existing patents more quickly. We ended up not bidding because of some rules for the bid, notably we could be awarded the bid, spend a couple of million on implementation, then the gov could cancel the contract and pay nothing. That was too much risk.
So the decision was made to change the rules, and allow the USPTO to default to 'award' unless they found pretty obvious prior art in the patent office itself (not outside), and leave it to the patent applicant to defend the patent. This was widely hailed as a big step forward at the time, as it would (and actually did) cut the backlog from six to 10 years down to a year or two. But this was before software patents, widespread gaming via trivial patents and the art of patent trolling. Trivial patents have always been with us, but had not been such a serious problem, and patent trolls didn't really exist yet - a lot of this was the unintended consequence of the new rules _combined with_ the explosion of computers, making it easy both to generate new patents and search for potential victims. It's taken a while, but IMHO we are finally tweaking the new system in ways that are bringing the system back into sanity.
How is this different from rush hour everywhere?
As a furriner WRT Virgin Rail, I wonder from afar - rush hour is always going to be packed, regardless of the transport mechanism. In fact there are multiple transport studies that basically say that when you expand capacity in one method or route, soon that capacity will be filled as more people choose that method or route. We've all seen that as well, and we can see historically that when a new highway or rail line goes in, people move to new housing to take advantage.
The other aspect is purely practical. Rush hour traffic (of whatever kind) may be four to 10 times as busy as the other 22 or 20 hours of the day. Providing infrastructure to handle any arbitrary peak traffic situation can thus cost you four times to 10 times as much as what's required to handle the overall mean traffic demand, which obviously will increase prices unless some magical government agency subsidizes (which is just hiding and time shifting the cost). This is a delicate balance, which every transport agency ever has had to deal with.
So, bottom line, how much are you willing to pay either in fares or taxes to provide a permanent 'always a seat available' capability?
Actually an argument for a public utility to own the last mile
> Broadband requires eye-watering investments but it has never been very profitable on its own, requiring cross subsidies from telephone or media services.
This is an argument to create a national or a series of state-level agencies or public utilities chartered to do nothing but build and run the fiber to the home. Then all the media companies could compete to deliver the goods, while the maintenance of the fiber itself would be completely free of the various forms of stealthy monopolistic behaviors. The utility would simply be responsible for maximising throughput, with source-agnostic quality of service.
Public utilities and government agencies are better at handling these kinds of infrastructure commitments, and are (one hopes) less likely to sully them with cross subsidies. What governments are _not_ good at is participating in markets and trying to be businesses. The plain fact is that the last mile of fiber to the home is an infrastructure problem that could be solved relatively quickly with a government commitment to an authority with the capability to make this happen, rather than throwing money down rat holes trying to bribe media companies into doing it.
There is an analogy worth pondering. The rail systems here in the US _could_ have been turned around in the late 1960s, in such a way that today's passenger trains would be fast and efficient. At that time the railroads were all teetering on bankruptcy and were bailed out with forced mergers and various other means, including some nationalization - Amtrak was one unworkable result. The alternative that would have made sense for the future would have been for the US to nationalize the rails but not the companies, turning the rails into an analogue of the Federal Highway System and allowing all rail companies to compete on the service of the actual trains. As demand grew, the rail system could have been grown in the same way.
Re: Giving up on space
I wouldn't be too glum. The news, which is generally written from the perspective of the great unwashed, doesn't give a good picture of what's really going on. For starters, while NASA's budget is "only" about $15 billion (that's bigger than Hollywood), the US military space program is something over $20 billion. And while all global government space programs together are about $70 billion, that's dwarfed by the money flowing through commercial space - about $250 billion. That's mostly the commsat market of course, but still.
Meanwhile, the growth of commercial and private space activity is beginning to look very encouraging. I've only really been following closely since 2011, but in that time this area has blossomed, with ever-increasing activity, quality, successes, and business. The launch cost structure defined by SpaceX is half that of the old days, and is well on its way to dropping by another 1/3 to 1/2 if/when the reusable first stage becomes the norm. This is forcing ULA, for example, to completely restructure their company to compete with SpaceX.
The big thing about all this is that as costs to get into space ("LEO is 1/2 way to everywhere") are reduced, the potential launch market goes up geometrically. At 1/2 the cost the market is probably at least 4 times as large. This in turn will drive higher production volumes, reducing costs further and improving reliability and dependability in the process. We are transitioning from the hand-built Hupmobile to the factory-automated Model T.
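The cost-to-market relationship above is essentially a price-elasticity claim: a halved price yielding a roughly 4x market implies an elasticity of about 2. A tiny sketch, with the elasticity value assumed purely for illustration:

```python
def market_size(base_size, base_price, new_price, elasticity=2.0):
    """Constant-elasticity demand model: size scales as (price ratio)^(-elasticity).

    With elasticity 2, halving the price quadruples the addressable market.
    The elasticity value is an assumption for illustration, not a measured figure.
    """
    return base_size * (new_price / base_price) ** (-elasticity)

# Halve a notional $60M launch price: 100 launches/year -> 400
print(market_size(100, 60e6, 30e6))  # 400.0
```

The same relationship run the other way is why high launch prices kept the market small for decades: small volumes keep unit costs high, which keeps volumes small.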
In the meantime, the technology is advancing on all fronts. That is the less well known factor of the SpaceX success - they were able to build a 'clean sheet' design for everything, using the latest in rocket technology and materials. For instance the $5 million turbo pump was replaced with a $500,000 turbo pump built in-house. There are a dozen advanced ion propulsion systems, and even some work on exotic physics. There are IIRC two companies working on new nuclear thermal rockets. (A minor aside re nanotechnology - check out NanoRAM, which is presently in use in several USAF satellites.)
Not to go on too long, but all this is trending toward an impending explosion in all aspects of space.
He's being fairly tolerant on this apparently
In my experience most groups have a single defined acceptable style that is pretty strict, not a set of styles that are acceptable. In every case the group insists on using that style, period, no exceptions, and code reviews include this aspect. For my part, having settled on Doxygen as the auto-doc system, I've been using a tweaked set of Vim scripts that build the comment structures automatically, so at least the form is there.
In-line documentation is a place where the 'principle of least surprise' applies. It is important for code readers to be able to scan quickly and absorb the essence without having to interpret unfamiliar comment layouts. This is similar to how drivers may have difficulty interpreting road signs when first driving in a new location that has different signage conventions. If variable declarations are _always_ preceded by a comment description, even if it is empty, then the eye picks each variable up and the reader now knows about it.
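As a sketch of the 'comment before every declaration, even if empty' convention, here is a hypothetical Python fragment using Doxygen-style `##` comment markers (all names and values are invented for illustration):

```python
## Maximum number of retry attempts before giving up.
MAX_RETRIES = 3

## Seconds to wait between retry attempts.
RETRY_DELAY = 1.5

## (no description yet - placeholder kept so the layout stays uniform)
DEBUG = False
```

Because every declaration carries the same comment shape, the eye can scan straight down the left margin; a missing description stands out as a visible gap rather than silently disappearing.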
Unfortunately the discoverers of this old data won't have the key.
I've been doing some work for the Drive Trust Alliance (http://drivetrust.com), so I'm tuned to the Full Disk Encryption / Self Encrypting Drive technology. By the end of 2017 nearly all storage will be using it.
So now I foresee a distant future when, after the collapse of human civilization, our successors, having risen to sentience and culture and having a robust archaeological science, discover this trove of human data in the Lunar Long Term Data Repository that we kindly left for future generations.
Unfortunately, all the data is encrypted, and the key is lost. Or there's a typo in the docs.
This speaks to a fundamental problem - such a data trove undoubtedly must contain secrets that should not be available to just anyone. But how to assure that the data is truly available in the distant future?
Linked Data and triple stores?
I'd be interested to see how well Neo4j works as a triple store. The Linked Data / RDF protocols are based on relations in the form subject-predicate-object. This structure generalizes to support every kind of database application easily, but at the cost of cycles and storage. Ultimate flexibility has a price.
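For a feel of why the S-P-O form generalizes so readily (and where the cycles go), here is a minimal in-memory sketch in Python - not Neo4j itself. A plausible Cypher mapping is noted in the docstring as an assumption, not a tested schema:

```python
class TripleStore:
    """Minimal in-memory subject-predicate-object store.

    In Neo4j one natural mapping (an assumption, not a tested schema) is a
    node per resource and a relationship per predicate, roughly:
        MERGE (s:Resource {uri: $s})
        MERGE (o:Resource {uri: $o})
        MERGE (s)-[:PREDICATE {uri: $p}]->(o)
    """

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        """Record one subject-predicate-object assertion."""
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Match triples; None acts as a wildcard, like an unbound SPARQL variable."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]


store = TripleStore()
store.add("ex:alice", "foaf:knows", "ex:bob")
store.add("ex:alice", "foaf:name", '"Alice"')
print(store.query(s="ex:alice", p="foaf:knows"))  # [('ex:alice', 'foaf:knows', 'ex:bob')]
```

The flexibility cost is visible even here: every query is a scan over undifferentiated triples unless you add per-position indexes, which is exactly the storage-for-cycles trade dedicated triple stores make.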
Actually their profit wasn't that great. Their problem is that their _cost_ is more than $83 million.
re: One wonders why it took so long
Pork - not really, just a classic tech disruption. The Atlas and Delta vehicles are 1960s-era technology, originally designed as ICBMs. They have been updated greatly, but still. ULA and its parents are companies whose entire business structure has been subsumed into the government contracting process, which is a highly specialized, expensive operational paradigm. There are reasons why few small companies even try to work on government projects directly, instead subcontracting to big companies that have the huge paperwork-mill departments and expertise to meet the government requirements and stay out of jail. (Case in point: long ago I was told by a McDonnell Douglas executive that the paperwork trail for a single DC-10 weighed as much as the airplane over its lifetime. Perhaps a small exaggeration, but we were working on a proposal to scan that paperwork so they could have back the huge hangar where it was stored.)
So SpaceX has two benefits - or three: new tech that cuts the cost of manufacturing in half, a new business model that depends on computing to eliminate the paper shuffling required to meet USAF and federal contract requirements, and the extreme pressure on the USAF to move away from Russian parts and use Made In USA parts. ULA is more like a deer in the headlights of new business technology and rocket technology.
Re: Except for the fact that Doohan was Canadian?
A True Scot
Re: Meanwhile, at the Pork Barrel Bar..
A weird, slightly relevant example or analogy. Back in the 1990s, General Motors could go from a blank sheet of paper to a new engine design coming off the assembly line in under a year. But it took two to three years to design a new headlight or taillight and get it into production.
Sometimes it's not the skills but the interest and motivation. You can develop skills. Try going to some of the space conferences like ISDC or Space Tech Expo, meet up with the many space nerds out there, maybe work on a Kickstarter, etc. My company is preparing to accept volunteers to help with new data for The Integrated Space Plan (http://thespaceplan.com), adding information, curating, and researching. We're making a big presentation next month at ISDC (http://isdc2016.nss.org).
Of course every company also needs janitors and other non-sexy jobs. You just have to be in the right place at the right time, or sufficiently persistent. I know someone who called his desired employer every week for a year and finally got the job.
Re: Pricing's gonna change...
IIRC the second stages are nearly orbital - I know some older second and third stages are still in orbit. So I'm thinking that by similarly using a bit more fuel in the second stage, after putting their payloads into the proper trajectory, those units could be boosted further to a parking orbit for later use as a resource. This would not work for every launch, but I'm sure it would be feasible for some launches. I recall some talk about returning the second stage and landing it as well, but I am guessing that would only work using a 1/2-orbit (a barge 1000 or more miles down range) or full-orbit strategy. Considering the value of materials in space, I'm thinking the orbital strategy would have the best long-term value.