So good for another 7 generations?
At which point the gate will be 1 atom wide and 1 atom long.
Of course *how* you get there is a more difficult problem.
So end game some time between 2026 and 2033.
Some excellent points.
I'd suggest a rational basis for a model would be basic physics (with *all* assumptions tested first) then expand out to start identifying the missing areas and start on actual physics and chemistry models of those, *not* using fudge factors.
How viable that is in the real world is debatable, but I think fudge factors are just another facet of Knuth's warning that premature optimization is the root of all evil.
"Then maybe we should stop trying to base world-wide policies on them at all, then."
RV Jones in "Instruments and Experiences" called that a "Doctrine of Impotence." Basically "We can't do it so let's not even look at what the *boundaries* of our ignorance are."
This is simply not an option because the *effects* of climate change (never mind what's causing them) are so *vast* on people and property that modelling to get some idea of the scale of the events *has* to be done. Insurance companies do flood modelling for example to decide what premiums they should set long term.
" it would be a set of coupled Navier-Stokes equations with unbelievably nasty nonlinear coupling terms "
It's quite instructive to see how aircraft, missile and space vehicle designers use tools that implement similar equations.
Publicly identify the ranges of speed, temperature, pressure, altitude (and possibly body shape) the tool is *known* to be reliable over, and at what point "all bets are off."
Publicly give the error bounds their tools' results will have.
Run new tools (or upgraded versions) against *standard* problems whose correct results have been verified either in hardware or derived closed form equations, with results from other CFD programs as a last resort.
Warn where the tool uses *gross* simplifications of the physics. The classic being the two-equation turbulence model, which *can* work pretty well but does not involve *any* of the underlying variables that are known to affect turbulence.
Warn that the results need *careful* interpretation and *should* be checked against real life (in a wind tunnel or sub scale flight models).
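That "standard problems with known closed-form answers" check is easy to sketch. Below is a toy stand-in (my own illustration, not any real CFD benchmark): verify a crude Euler integrator against the exact solution of dy/dt = -ky, publishing both the error bound at a sane step size and the step size where "all bets are off."

```python
import math

def euler_decay(k, y0, t_end, dt):
    """Explicit-Euler integration of dy/dt = -k*y (the stand-in 'new tool')."""
    steps = int(round(t_end / dt))
    y = y0
    ys = [y]
    for _ in range(steps):
        y += dt * (-k * y)
        ys.append(y)
    return ys

def max_rel_error(k, y0, t_end, dt):
    """Worst-case relative error versus the closed-form answer y0*exp(-k*t)."""
    ys = euler_decay(k, y0, t_end, dt)
    worst = 0.0
    for n, y in enumerate(ys):
        exact = y0 * math.exp(-k * n * dt)
        worst = max(worst, abs(y - exact) / exact)
    return worst

# Publish the error bound for a sensible step size...
err_fine = max_rel_error(k=1.0, y0=1.0, t_end=1.0, dt=0.001)
# ...and the point where "all bets are off" (step far too coarse).
err_coarse = max_rel_error(k=1.0, y0=1.0, t_end=1.0, dt=0.5)
```

Same idea, scaled up, is what the aerospace CFD people do with wind tunnel data standing in for the closed-form solution.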
A similar process also happens with the tools used for designing launch ascent trajectories and orbital paths.
The results have been *cautious* use of these tools and gradual *continued* improvement from a *trustworthy* base with *lots* of cross checking between tools and models and flight data or orbit observations.
There appears to have been no *systematic* review of any of the General Circulation Models. It's either "The science is settled" (although in at least one case it was wrong) or "We (IE the developers) know how to twiddle the fudge factors to make it reproduce the last few decades' results. We'll just leave it on those settings and that's that," which is fine *unless* you have a very slowly changing factor that you just happen to have set to a reasonable value for *this* period.
Couple that with the software development practices described in the HARRY_READ_ME file from the CRU at East Anglia and you have a house of (punched?) cards.
Humans *have* altered the global climate on human timescales. CFCs and the damage to (and recovery of) the Ozone layer bear that out. That *human* generated CO2 is a cause of *bigger* changes is an *extraordinary* claim and demands *extraordinary* evidence. The scale of these issues is so vast that GCM development *should* use our most reliable development methods and highest documentation standards.
Sadly that does not seem to be the case.
"Predictions turn out to be quite accurate, and usually conservative (ie. predicting slightly lower warming than actually measured, or predicting sea ice to disappear more slowly than it currently is doing)."
In which case either you, or the people you've linked to, do not understand what the word "conservative" means in a *scientific* context.
A good "conservative" model in this context makes predictions that are slightly (IE at most tens of %, rather than say 3x) *worse* than real life.
So when you design things or base policy on them you know it will be OK as it will *always* be below prediction.
Once again the people who build these models seem to have thought they had a simple element to deal with, when they actually had a simplistic model of something that is actually (a bit more) complex.
I think the fact it has taken *five* decades to review *this* assumption speaks volumes for how systematically the modellers have accepted their models' limitations and worked to eliminate them.
"fudge factors" are the universes way of telling you that you do not fully *understand* the system.
But it seems the bits the modellers *think* they understand in fact they do not.
Thumbs up for *starting* to look through the fine print, but *boy* has it taken a long time to get here.
If this is a *key* transfer and is completely secure, then the *main* data channel can run as fast as necessary *provided* the key is big enough (and probably one-time use).
The GPS data channel is 50 bits a second. Whether a data channel is fast enough depends on what you want to send and where you want to send it from. Sometimes you don't get to choose either (New Horizons will dump its 8 GB data recorder contents over a sub-1 kb/s channel because it's operating out at Pluto's distance).
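The "key big enough and one-time use" point is just the one-time pad: if the secure side-channel delivers a random key at least as long as the traffic, the fast main channel can carry XOR-encrypted data with information-theoretic security. A minimal sketch (illustrative only, not a QKD implementation):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR data with a key that is at least as long
    and is *never* reused. XOR is its own inverse, so the same
    function both encrypts and decrypts."""
    assert len(key) >= len(data), "key must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"main channel traffic"
key = secrets.token_bytes(len(msg))  # stands in for the slow, secure key channel
ct = otp_xor(msg, key)               # goes over the fast main channel
pt = otp_xor(ct, key)                # receiver applies the same key
```

The catch, of course, is that you burn key material at the same rate as data, which is exactly why the key channel's rate matters.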
Well done on a 1st effort.
<rant>
</profanity filter>
"Bond realised that the numbers shared 17 bits in common while the remaining 15 digits appeared to be some sort of counter, rather than a random number."
Fixed fields and *counters*.
Seriously, is a shift register *that* much more expensive to implement? Has 8 *decades* of computer generation of pseudo random numbers been a total fucking waste of time?
The recurring stench of "security by obscurity" makes me want to vomit.
<profanity filter>
</rant>
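To be clear about how cheap "better than a counter" is: a maximal-length 16-bit Galois LFSR costs one shift and one conditional XOR per step. The sketch below uses the well-known 0xB400 tap mask; note that an LFSR is *still* not cryptographically secure, it's merely the bare minimum above fixed fields and counters.

```python
def lfsr16(state: int):
    """16-bit maximal-length Galois LFSR (taps at bits 16,14,13,11,
    i.e. feedback mask 0xB400). One shift plus one XOR per output --
    about as cheap as hardware gets."""
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield state

# A maximal-length 16-bit LFSR walks through all 65535 non-zero states
# before repeating -- no fixed fields, no bare counter in sight.
seed = 0xACE1
period = 0
for s in lfsr16(seed):
    period += 1
    if s == seed:
        break
```

Even this 1960s-era construction would have defeated the "17 fixed bits plus a counter" analysis described in the article.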
"And this hard guy in government had better be careful, because if he is seen to be pouring scorn on a major IT supplier then it won't be long before other IT suppliers figure they don't want to deal with government on future contracts. "
Isn't that pretty much what already happens?
.."or are forced to use only a limited number of companies because the others won't play ball."
With the rules that HMG uses to select which companies they will allow to bid, that pretty much happens already as well.
You've missed the *other* possible effect: that HMG *might* decide to re-structure its requirements (including its interface documentation and procurement cycle) to allow smaller players. This might allow a big enough pool of development *companies* to make benching underperformers a *real* threat.
Because *real* competitive development companies cannot afford to f**k about with endless beauty pageants to be told "No, you were scored down because you did not use enough staff from Wales/Scotland/NI/Serbia/some-other-f**king-place-they-put-in-the-list"
Perhaps they *might* like to consider breaking the work down into *smaller* packages and not using this "£10m/yr over a decade equals £100m. We can't have *any* one on the bid list with a net worth of less than £100m" b***cks.
Anyone remember NIRS II? "We'll be *so* much cheaper as it's the first client/server social security system in the *world*." There was a reason *everyone* else bid a mainframe solution. R(very big system) x R(never-used implementation strategy) x R(unexpected problems) = probability(of massive clusterf**k).
Just how *insane* does that sound to the average person in the street (or *on* the street, given the level electricity prices will have reached)?
And to deliver a generating capacity < 1/2 the *current* level.
The only "good" part of this study is the recognition that to make *any* unreliable renewable energy system viable you either need *huge* storage* or a *global* power grid, given that all of Europe can be becalmed for *days* with zero power generation.
Overall thumbs down. Bad and stupid.
*But* if I were a betting life form I'd note that volcanoes turn up all over the solar system. Free water less so (on this evidence *alone*, however).
So whenever this clay turns up, the assumption that there is water around is no longer *safe* (you now have to actually find the water).
Otherwise some first rate jokes and trolls.
You could go with a Halbach array passive system:
http://en.wikipedia.org/wiki/Halbach_array
This cancels the magnetic field on one side of the array (the exterior) and roughly doubles it on the other.
IIRC a low rolling resistance aircraft trolley was one of the first applications suggested for this. Above something like 10 m/s the trolley becomes self-levitating.
The other key *potential* enabler of this was the "sub terrene" technique of melting rock to form the tunnel wall *without* using a lining.
Regenerative braking should help a lot, but you have to inject a certain amount of energy into the system to *start* with (and there will be losses to be topped up).
I've always liked the concept, but you'd have to build a constant-depth tunnel system to do it.
That said the velocity is set by a) ability to endure acceleration to cruise velocity and b) how much energy you're prepared to put into the capsule.
But Mach 2 would be easy, Mach 3+ definitely possible.
The *rate* gets bigger as the amount gets bigger.
Like the greenhouse effect the *basic* physics of it have been known for decades.
I've always thought a layer of ping pong balls would float on the water and return the reflectivity to allow refreezing.
The question is how many *other* non linear effects are at play.
So 2% of the *whole* genome encodes *all* the working bits?
That means the organism is devoting 50x more than is necessary to its DNA copying/checking/repairing budget.
This suggests that over generations (say a million years) individuals who did not "waste" all that energy would out-compete the others. There are only 2 possible conclusions.
a) The energy budget for this process as a % of cell metabolism is *so* small that even being 50x too inefficient is not a real burden
b) The other 98% (somehow) *justifies* its being retained by being *useful*.
That would be what we laymen call a *clue*.
BTW a DNA codon is a 6-bit code which codes for 20 amino acids. The *logical* design would assign 2-3 codes to *every* amino acid. Except it assigns 1 code to a start-of-protein message and 3 to an end-of-protein, neatly leaving 60 codes to give 3 options for each amino acid. *Except* that's not what happens, and in one case the amino acid tryptophan is coded by *one* codon, so any mutation at this point *guarantees* a change in outcome.
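The codon bookkeeping is easy to check against the standard genetic code table (the degeneracy counts below are the standard-table values; note methionine shares tryptophan's single-codon status, since AUG doubles as the start signal):

```python
# Degeneracy of the standard genetic code: how many of the sense codons
# map to each amino acid (counts taken from the standard code table).
degeneracy = {
    "Ala": 4, "Arg": 6, "Asn": 2, "Asp": 2, "Cys": 2,
    "Gln": 2, "Glu": 2, "Gly": 4, "His": 2, "Ile": 3,
    "Leu": 6, "Lys": 2, "Met": 1, "Phe": 2, "Pro": 4,
    "Ser": 6, "Thr": 4, "Trp": 1, "Tyr": 2, "Val": 4,
}

total_codons = 4 ** 3                    # 64 possible base triplets (the "6-bit code")
sense_codons = sum(degeneracy.values())  # the remaining 3 of 64 are stop codons
# Amino acids with exactly one codon -- where any mutation must change the output.
single_shot = sorted(aa for aa, n in degeneracy.items() if n == 1)
```

So redundancy ranges from 6 codons (Leu, Ser, Arg) down to 1, rather than the tidy 3-per-amino-acid a "logical" designer would pick.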
The view of the DNA/RNA/Ribosome as a passive data storage system has been obsolete for some time. It's looking more like a database with embedded rules being actioned depending on modifiers.
Oh, I agree the idea *sounds* nuts.
But then so does the Casimir effect (also an effect of the quantum vacuum) and that is both real and measured. Quantum entanglement still seems like voodoo to me yet people appear to be gearing up for it to be the SoA in high security data transmission.
Dr White at NASA seems to be making progress on them. I would not have described the papers I've seen on the subject as "high school level."
This was just a brief look to the future. Most of my post was about how Voyager does what it does and *keeps* doing it after 35 years.
Well IIRC these are *serial* processors (either true 1 bit at a time or one 4-bit nybble at a time) in MSI CMOS.
Also the *first* time NASA felt brave enough to make the RAM out of CMOS as well (previous vehicles used the rock-solid and rock-heavy core memory), wired *directly* to the RTG output.
All clocked at a brisk 4 kHz. That is not a typo, and yes, that is less than the usual clock on either a digital watch or a pocket calculator.
As for the tape recorders, these are not the usual reel-to-reel type. The data is written in one direction and read *backwards*, to be sorted out once received back on Earth. The New Horizons mission to Pluto will be running at about 1 kb/s at that range, so the data rate from Voyager is even lower.
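Taking those figures at face value (8 GB recorder, ~1 kb/s link), the back-of-envelope downlink time is worth running. This ignores coding overhead and DSN scheduling, so treat it as a floor, not a prediction:

```python
# Back-of-envelope downlink time for New Horizons' recorder contents,
# using the figures quoted above (8 GB recorder, ~1 kb/s link).
recorder_bits = 8e9 * 8        # 8 GB expressed in bits
link_bps = 1_000               # ~1 kilobit per second from Pluto's distance
seconds = recorder_bits / link_bps
days = seconds / 86_400        # roughly two years of pure transmission time
```

Which is why deep-space missions prioritise ruthlessly over what comes down first.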
The Reed-Solomon codes used to encode the data are a cousin of the ones used to cope with missing blocks on CDs. Different failure modes but a similar effect.
Monitoring space probes, identifying faults (often from *very* limited telemetry channel data) and devising workarounds is very demanding. NASA has developed substantial AI tools to do this. It can't hold a conversation, but it can spot when things are out of whack, identify a list of what the causes *might* be and suggest workarounds IIRC.
As for future probes: *if* the NASA work on "Quantum Vacuum Plasma Thrusters" works out, we may be seeing the first *real* reactionless propulsion system. IE no *propellant* required.
Which means with enough power and time *any* spot in the Solar System becomes potentially visitable by humans.
"So a gaggle of self important directors sitting in a room with the chaps from GCHQ might make both sides feel big and important, "
It's a con-sultancy sale.
Make sure you're at the top of their minds, so that when something happens (and I think it's a pretty safe bet amongst this lot that something *will*) they can say "I know, we need to get GCHQ in."
Of course the fact that the CEO's demand to be able to plug his iPad/Kindle/expensive camera/whatever into the network is the *cause* of the latest f***up will not be mentioned.
Cynical. Hell yeah.
"Also, the Holland dyke building took place over 500 years while Bangladesh may need to implement mitigation measures over a much shorter time scale. Finally, while Holland's dyke building was prompted by a combination of weather mitigation AND arable land expansion, Bangladesh currently seems to be facing both weather mitigation AND current arable land LOSS."
Which would suggest that Bangladesh has *greater* incentives to implement a mitigation strategy than Holland had.
The technology to do so has also moved on a bit since the 15th century, although with *enough* humans properly organized you can probably do the job the way the Dutch did it. But in the West you may have noticed you don't see many people digging up roads anymore. Just 1 guy and his mini digger.
The question is does the *will* exist to do so?
"The oil and mineral supplies aren't there to allow a much higher population to use 7 times as many per capita)."
Well the *oil* might not be there but that's a *very* narrow view of the global energy resources available to humanity.
As for resources, well, are you assuming that there is no *recycling* of resources and everything ends up in a global landfill? In which case you will be capturing that methane for power, won't you? It's unlikely to be more than 10 MW a site, but every little helps.
electric jet....
what does he mean?
"jet" like "scanner" is a word with *many* possible meanings. They refer to a *result*, not a cause.
"Ion engines" for example, produce a jet as well, but it's not a gas turbine, which is probably what most people are talking about.
For something more direct, NASA did look at a *direct* electric thruster. This took GH2 (gaseous hydrogen) and fed it to a 30 kW tungsten filament heating element (yes, it's basically a giant light bulb filament). IIRC it was part of their nuclear rocket engine work, for attitude control, and had a thrust of < 100 lb.
You'll need a bit more thrust to get vertical takeoff.
It's usually called a "more electric aircraft", and the electricity is for replacing the *hydraulic* systems on board with electric actuation, eliminating *lots* of high pressure (c. 4000 psi) tubing, issues with leaks, issues with the fluid absorbing water, air bubbles etc.
Front runners for the generator system are miniature un-lubricated gas turbine systems with direct coupled generators and fuel cell systems.
No one is planning on replacing the high bypass gas turbines hanging under the wings *anytime* soon.
According to Mark Hempsell (of Reaction Engines) US citizens (I'm fairly sure Musk is one now) risk *arrest* under the ITAR rules as they are funding a (potentially) military technology.
And courtesy of Shrub, *all* space technology is viewed that way, not just solid fuel, re-entry nosecones or space nuclear engineering.
Musk is not putting on an Orange jumpsuit anytime soon.
It's called a "Sense of humor"
And he's quite adept at thinking on his feet when using it. Anyone remember the question about being a Bond villain and his quip that "I don't have any collarless jackets in my wardrobe"?
He may also be aware of the story of HH sinking a *lot* of money (for the time) into a SoA steam-powered car. Logically quite reasonable (the team found a way to solve most problems, after a fashion) but practically completely bonkers.
I suspect this is more of a ribbing than a serious idea and (as a graduate physicist) he'd know just how *much* energy such a machine would need.
There is a reason why you cannot buy a high *thrust* ion drive for rockets.
"I can't think of any instances where pure automation on its own has resulted in a crash"
IIRC there was an early Airbus which was flying at an air show. I *think* it was flying a low-altitude, high-angle-of-attack pass and the pilot got into trouble. He went for an emergency pull up but fly-by-wire said no.
Made a nasty mess over the landscape.
I'm sure there have been a couple of other Airbus incidents over the years where they've been at the boundary of their normal flight envelope and things have not ended well.
This is a variable geometry wing *without* variable geometry.
Which is *very* clever.
The *key* worry about VG is a mechanical failure locking the wings in the supersonic mode and making landing impossible. Part of why Concorde flew and the Boeing thing did not. Civilian operators are *very* twitchy about active lift systems (VG, blown flaps etc) because despite *huge* benefits (blown flaps reduced the size of the Buccaneer's wing by 50%) there is no *backup* if it fails during takeoff/landing.
Caveats.
The 100-passenger size for Concorde was *borderline* in the 1960s. IIRC current thinking is 300+ or forget about it. You can tell this was done by a theoretical aerodynamics guy.
"Rotating engines is routine." Outside of Barnes Wallis's "Swallow" concept name one. AFAIK all *real* VG aircraft (XB-70, F-111, F-14) have flown with engines in (or under) the *fuselage*. The closest seems to be the Fairly Rotordyne
http://en.wikipedia.org/wiki/Fairey_Rotodyne
So doable, but not common. A 90deg rotation with cabling is fairly straightforward.
As for "optical devices" err periscopes were quite popular int he M3 designs of the 1950's but I hear fly by wire is quite popular (along with remote cameras) these days and commercial aircraft are cleared for instrument landing and "autoland" systems have existed since the early 1960's.
There are a number of ways to do fuel-efficient large-scale supersonic flight (the oblique flying wing being probably the simplest but *most* counter-intuitive), but while they work on paper aerospace companies have just refused to try them. Boeing seem to be having trouble just getting the blended wing body concept accepted.
Which leaves the question of how exactly this thing will dock at the departure lounge?
I especially like "?" as shorthand for the what-you-expect-them-to-give-good-reception-everywhere-and-decent-data-rates whine that some PFY behind the counter at a phone shop is indicating by that helpless/useless shrug.
Nut "something sometime" is a bit too honest for an *actual* marketing campaign.
However "Anything anywhere" might be what *they* themselves would prefer.
"Fact Check. The Van Allen belts themselves are MUCH more dangerous than normal deepspace. Apollo astronauts were expected to stay in the command module for most of the trip."
Absolutely. It is also a fact that all Apollo astronauts took a calculated *substantial* risk that no major solar flare would hit them on the way to or from the Moon (and since all trips were taken during the lunar day, you'd get hit badly on the Moon in the LM too).
I think the trouble is there has been *relatively* little surveying of the Van Allen belts over the years. I'm not sure there has ever been more than one satellite in orbit through them at a time.
This sort of information is interesting in its own right but could also help with making low-thrust maneuvers more viable. That would be a *major* benefit to anyone launching a comsat with a station-keeping ion engine. Avoiding the need for an Apogee Kick Motor (roughly 25-33% of payload weight) would either mean a smaller launch vehicle ($ saved) or more station-keeping propellant and/or more transponders -> bigger $$.
And it might help re-size the radiation shielding on the NASA MPCV.
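To put that 25-33% figure in context, here's a back-of-envelope with a *hypothetical* 3,000 kg satellite (my number, purely for illustration):

```python
# Hypothetical example: mass freed up by deleting the Apogee Kick Motor,
# using the 25-33% of payload weight range quoted above.
separated_mass_kg = 3_000.0                       # illustrative satellite mass
akm_fraction_low, akm_fraction_high = 0.25, 0.33  # AKM as a fraction of payload

saved_low = separated_mass_kg * akm_fraction_low    # kg freed at the low end
saved_high = separated_mass_kg * akm_fraction_high  # kg freed at the high end
```

Most of a tonne of mass budget handed back to transponders or station-keeping propellant is real money on a comsat.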