* Posts by Jason Ozolins

102 publicly visible posts • joined 19 Nov 2009


HPE CEO Meg Whitman QUITS, MAN! Neri to replace chief exec in Feb

Jason Ozolins

Re: Whitman did the right thing

Leaving aside the false dichotomy that you have to either like huge CEO:employee wage ratios, or be a fan of Soviet dictatorships, was America less capitalist in the 1950s, when the ratio was 20:1 instead of 200:1?

https://en.wikipedia.org/wiki/Wage_ratio

Lock up your top-of-racks, says Cisco, there's a bug in the USB code

Jason Ozolins

Re: Not the biggest threat

Covertly setting up eavesdropping on installed gear, or installing backdoors during maintenance or equipment delivery... these are both things that happen:

https://www.techdirt.com/articles/20140124/10564825981/nsa-interception-action-tor-developers-computer-gets-mysteriously-re-routed-to-virginia.shtml

Jason Ozolins

Re: Re anonymous coward

Reading the data may not be necessary to hit the problem - from the description it sounds more like a vulnerability in either the USB protocol code or the filesystem driver, so the exploit would involve a USB stick with customised firmware on its microcontroller, or a custom filesystem creator to write wacky stuff into filesystem metadata.

Yay, more 'STEM' grads! You're using your maths degree to do ... what?

Jason Ozolins

"Back of the envelope" calculation and systems administration...

...go hand in hand. Mental arithmetic has often helped me spot clues that have led to the root cause of a problem. Big-O estimates help me choose sensible approaches for solving problems at scale.

Sure, I use a calculator when I want exact figures or to do hairy stuff. But I'd hate to lose the intuition that mental arithmetic gives you.

Think Fortran, assembly language programming is boring and useless? Tell that to the NASA Voyager team

Jason Ozolins

Re: For which chipset?

Yeah, the 74181 is a 4-bit ALU on a chip, and NASA mentions using TTL 4-bit parallel logic in the chapter linked in the first comment, processing 18-bit words in 5 cycles, as a significant advance over bit-serial ALUs. Fewer wires, fewer packages, less power, but less speed than a full parallel ALU.
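To make the 18-bit-in-5-cycles arithmetic concrete, here's a toy software model (nothing like NASA's actual hardware, just an illustration of the slicing) of pushing an 18-bit add through a single 4-bit slice:

    def slice_add(a4, b4, carry_in):
        """One pass through a 4-bit adder slice, roughly what a 74181 does in add mode."""
        total = a4 + b4 + carry_in
        return total & 0xF, total >> 4          # 4-bit result, carry out

    def add_18bit(a, b, slice_bits=4):
        """Add two 18-bit words by cycling them through one 4-bit slice: 5 cycles."""
        result, carry, cycles = 0, 0, 0
        for shift in range(0, 18, slice_bits):
            r4, carry = slice_add((a >> shift) & 0xF, (b >> shift) & 0xF, carry)
            result |= r4 << shift
            cycles += 1
        return result & 0x3FFFF, cycles

    print(add_18bit(0b101010101010101010, 0b010101010101010101))   # (262143, 5)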

FWIW, DEC sold a bit-serial ALU version of their PDP-8 - the PDP-8/S - at a fifth of the price of the full 12-bit parallel unit, so this was a strategy pursued even outside the limits imposed by space engineering.

In this chapter about Galileo [http://history.nasa.gov/computers/Ch6-3.html] there is mention of 2901-series 4-bit slice ALUs being used in parallel to make a PDP-11/23-equivalent machine with a full 16-bit ALU and a fully customizable instruction set. Then NASA found that the processors were not sufficiently radiation hardened to survive the conditions found around Jupiter, and had to pay Sandia $5M to fabricate special versions of the chips that could survive being blatted by high-energy particles. If the spacecraft had not been delayed, the radiation problems would not have been discovered before launch...

NBN cost blows out by at least AU$10bn and FTTN isn't launched yet

Jason Ozolins

As someone who was in a rural area from 2000-2015, was never able to get ADSL on the oversubscribed, overlong, incredibly unreliable Telstra copper, and had a choice of exactly one semi-affordable terrestrial wireless ISP with its own nightmarish traffic engineering (8000ms pings!) and reliability problems, I'll just say that this government's reinstatement of Telstra copper as the key infrastructure in the NBN is incredibly galling.

Telstra took a pile of government funding to upgrade an area some 40km south of Canberra to ADSL... except for the premises where it didn't... and now that the area's exchange's DSLAM capacity is fully subscribed, the backhaul from the local exchange is so contended as to make ADSL seem like dialup in the evenings. Good one, guys. Best Telstra could manage last year was to send out someone from BigPond to attend a community meeting, who just repeated the mantra that due to structural separation, she had no control over any of the capacity and reliability related complaints that the residents had about the wholesale DSL service that BigPond resold. Basically she just smiled and nodded and made excuses. Hilarious. Meanwhile no competitor ISP had any way to sell any usable services into an area that Telstra Wholesale showed no interest in provisioning adequately.

Australia had a chance to finally structurally separate Telstra by changing the infrastructure base. Instead, the same old mystery midden of copper cabling, and Telstra's contract to maintain it, will be with us for decades. What a debacle. Oh well... since early 2015 I've been living in a suburb with NBN FTTP, getting a 100Mbit service. The difference from what I've had for the past 15 years is frankly incredible.

729 teraflops, 71,000-core Super cost just US$5,500 to build

Jason Ozolins

Re: Maybe the UK's Met Office should use it.

Yup. People where I work are looking at how to remove an up to 20% slowdown in large (>512 core) Unified Model runs that looks to be down to the rate at which the batch management system causes context switches on each node to track memory and disk usage. Never mind that the batch system doesn't use much CPU overall - the communication is so tightly coupled with the computation that a single slow process can make many other processes wait (and waste power/time). This is with all inter-node communication going over 56Gb Infiniband. Not very suitable code to run on loosely coupled cloud nodes.

(BTW, this is an example of an HPC job where turning on HyperThreading helps, as long as you only use one thread on each core for your compute job; the other hyperthreads get used to run OS/daemon/async comms stuff without causing context switches on the compute job threads. The observed performance hit from batch system accounting and other daemons is much lower with HT enabled.)

Anyway, this WD workload sounds very much like the sort of thing companies have long farmed out to their engineering workstations overnight using HTCondor:

http://en.wikipedia.org/wiki/HTCondor

If it can run well under Condor, it'd run just fine in the cloud...

Whomp, there it is: Seagate demos Kinetic disk drive

Jason Ozolins

Re: I mst be missing something...

Yeah, an application can write an object, key and other metadata straight to a Kinetic drive over a TCP connection, and retrieve it directly from the drive. Think Amazon S3 instead of a piece of block storage.

Making that scale is a simple matter of partitioning drives so that different tenants' key spaces are distinct, managing tenant authentication and space allocation on all the drives, locating drives with free space, keeping track of which drives you wrote copies of any object to, and notifying client applications when drives die so that they can do the needed replication from existing copies to re-establish the correct number and placement of replicas.

Ahem. So, *don't* think Amazon S3, actually... because if you buy Kinetics, you have to implement most of what S3 does in the layer above the Kinetic drives. If you were planning to do that anyway though, hey, Kinetic drives might be a decent fit. You could probably achieve much the same with AoE drives by moving a bit more intelligence into the layer above the drives. Shingled recording means that the Kinetic drive is treating each shingle zone a bit like a huge flash erase zone, with objects getting appended to a shingle zone and then garbage collected to other zones when the fraction of live data in a full zone drops too low. There's no inherent reason those zone allocations need to be tracked on each drive rather than on a server.
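As an illustration of how much of that bookkeeping lands above the drives, here's a toy sketch (purely hypothetical - not the Kinetic API or protocol, and the drive I/O is stubbed out) of the placement map such a layer would have to maintain:

    import random

    class PlacementMap:
        """Toy sketch of the layer the comment describes: which drives hold
        copies of which object key, and re-replication when a drive dies."""

        def __init__(self, drives, replicas=3):
            self.drives = set(drives)          # drive IDs believed alive, with free space
            self.replicas = replicas
            self.locations = {}                # object key -> set of drive IDs

        def put(self, key):
            chosen = set(random.sample(sorted(self.drives), self.replicas))
            # ...a real system would now push the object to each chosen drive...
            self.locations[key] = chosen
            return chosen

        def drive_died(self, drive_id):
            self.drives.discard(drive_id)
            for key, copies in self.locations.items():
                copies.discard(drive_id)
                while len(copies) < self.replicas:
                    new = random.choice(sorted(self.drives - copies))
                    # ...copy the object from a surviving replica to 'new'...
                    copies.add(new)

    pm = PlacementMap(drives=[f"drive{i}" for i in range(10)])
    pm.put("thumbnail/12345")
    pm.drive_died("drive3")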

Cray-cray Met Office spaffs £97m on very average HPC box

Jason Ozolins

Re: Good hardware but why not a real operating system? @AC

Cray XCs don't share address space across large numbers of sockets like SGI UV does. The XC40 has two-socket Xeon compute nodes, and each node runs its own OS instance.

Limits to effective scalability of commodity x86 servers with off the shelf IB vary, depending on the workloads you want to support, as well as which vendor you care to ask. =:^)

Jason Ozolins

It really depends on the workload. Running ensembles doesn't require tight coupling between different parts of the ensemble. If an hourly forecast takes more than an hour to produce, then you will be pipelining them, but each run will not be coupled tightly with the preceding or following run, so you can use the two sites to increase your overall job throughput.

There would still be dependencies on shared filesystems, but propagation delay between campuses would not be nearly as big a problem there as for message passing within communication-heavy jobs.

White LED lies: It's great, but Nobel physics prize-winning great?

Jason Ozolins

Glare Emitting Diodes

Spot on - I live out of town, have to walk significant distances around fields at night, and use LED torches for their good battery life, but *damn* the world looks awful when lit by an LED torch.

It really stuns me how if I turn the torch to my face it is utterly dazzling, but when I am looking for something out in the paddock it feels as if the light is a fraction of the brightness of the 14.4V tungsten filament torch that I got with my cordless drill. LEDs are great for glare, less so for *vision*.

I've had similar experiences with so-called 'warm white' CFLs from GE, so I just stick with Philips CFLs as they seem to have decent taste in phosphors.

A SCORCHIO fatboy SSD: Samsung SSD850 PRO 3D V-NAND

Jason Ozolins
Linux

Is Windows caching really that crap that...

...you need a third party tool like RAPID to stash away bits of your filesystem in otherwise-unused RAM? This is default behaviour under Linux. A lot of the time, I'm working on machines that report many gigs of cached filesystem data, without any special software, and filesystem operations that hit in the cache zip along nicely.

The only gotcha I've found with the Linux caching behaviour is that typical distro default settings for 'vm.swappiness' can cache filesystem data too aggressively to preserve low latencies for interactive/soft-real-time workloads. If you do heavy filesystem activity and, say, low request rate web serving on the same box, the filesystem churn can cause inactive executable pages to get dropped, which means that the next HTTP GET causes a flurry of reads to bring the executable back into memory. Seems like the people who set the default caching pressure care more about artificial throughput benchmarks than predictable response times.
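For what it's worth, the knob itself is just a sysctl; here is a minimal sketch of inspecting and changing it - the value written is only an example, not a recommendation, and changing it needs root:

    # Minimal sketch: inspect and (as root) change vm.swappiness.
    # Which value actually suits a workload is for the admin to decide.
    PATH = "/proc/sys/vm/swappiness"

    with open(PATH) as f:
        print("current vm.swappiness:", f.read().strip())

    try:
        with open(PATH, "w") as f:
            f.write("10\n")      # example value only
    except PermissionError:
        print("need root to change vm.swappiness")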

Cleversafe CEO: We would tell you about the 8TB drive, but...

Jason Ozolins

Re: Tech details

A quote attributed to the late Jim Gray: 'Disk is the new tape'. Read and write sizes should be going up to at least maintain the ratio of time spent reading/writing to time spent in seeking as disks get denser. Seagate don't quote seek specs for their 6TB drive, but if you guess at 8ms for a random seek, plus 4ms average rotational latency, you'd like to spend a decent amount of time transferring data in return for that 12ms of latency.

Say the media transfer rate is around 200MB/sec; then that's 1.6MB going by in one revolution. If you have these drives in, say, an 8+2 RAID-6 group, then multiply by 8 data disks and that's ~13MB you need to read in order to get one revolution (==8.3ms) worth of throughput from all the disks before moving the heads to the next I/O.

This was a real concern for a system I worked on where virtual tape RAID sets were being read in multiple sequential streams by our SAM-QFS HSM, but each read was only 1MB in size. For an 8+2 RAID set, that's only 128KB of data transferred per spindle. The disks spent most of their time seeking instead of reading when there were a few streams running to a single RAID set. I partially fixed this by patching the HSM stage-in binary to read in bigger chunks (I had the source to know what to patch, but it wasn't buildable from that source)... but most of the improvement came from controlling the file recall workload so as not to read files concurrently from a given virtual tape.
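To put rough numbers on it (all of them guesses taken from the figures above, not vendor specs), a quick back-of-the-envelope:

    # Rough back-of-the-envelope using the guessed figures above: 8ms seek,
    # 4ms rotational latency, ~200MB/sec media rate, 7200rpm, 8+2 RAID-6.
    seek_ms = 8.0
    rotational_ms = 4.0
    rev_ms = 8.33                 # one revolution at 7200rpm
    media_rate_mb_s = 200.0
    data_disks = 8                # data spindles in an 8+2 RAID-6 group

    mb_per_rev = media_rate_mb_s * rev_ms / 1000.0      # ~1.66MB passes under a head per rev
    full_stripe_io = mb_per_rev * data_disks            # ~13MB keeps all spindles busy for one rev

    def transfer_fraction(io_mb):
        """Fraction of each I/O spent actually transferring data, per spindle."""
        transfer_ms = (io_mb / data_disks) / media_rate_mb_s * 1000.0
        return transfer_ms / (transfer_ms + seek_ms + rotational_ms)

    print(f"{full_stripe_io:.1f}MB reads: {transfer_fraction(full_stripe_io):.0%} of the time transferring")
    print(f"1MB reads (the old HSM stage-in size): {transfer_fraction(1.0):.0%}")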

Intel unleashed octo-core speed demon for the power-crazed crowd

Jason Ozolins

Re: Looking at the over clocking...

Pretty sure all the Ivy Bridge chips (and the earlier Haswell mainstream chips) used Thermal Interface Material (i.e., some rubbery heatsink-paste-like goop) between the CPU die and the heat spreader. This was remarked on at the time as being worse for overclocking, as it was less efficient at shifting heat from the die to the heat spreader than the soldered joint used in preceding models. As you might expect, some brave/foolish folks went so far as to *remove* the heat spreader and upgrade the TIM, all in the name of lulz and extra framez per secondz:

http://www.overclock.net/t/1397672/deliding-a-4770k-haswell-improving-temperatures-and-maximizing-overclockablity

But these new, Xeon-derived parts seem to have a better (or, at least, *different*) coupling between the CPU die and the heatspreader:

http://www.ocdrift.com/intel-core-i7-5960x-de-lidded-haswell-e-uses-soldered-thermal-interface-material-tim/

This suggests that less dedicated overclockers may still be able to get good results from these chips without the surgery required for the previous K models. Indeed, pulling off the lid looks to have completely destroyed the 5960x CPU featured in the link above, so it's just as well really.

NBN Co adds apartments to FTTP rollout

Jason Ozolins

Re: Failure of vision

"The right way to do this would be for the government to provide subsidies and tax inducements to encourage telcos to cable out to hard to reach properties."

They did that. It was called the Higher Bandwidth Incentive Scheme, and then there was the later Broadband Connect scheme. Apparently the telcos were not sufficiently encouraged - I have no access to any wired broadband, and am using the only non-3G, non-satellite wireless provider that barely manages to scrape a living on the awkward rump of properties left after Telstra cherry-picked all the easy ADSL subscribers in my area.

That wireless service is unreliable (as in, regularly drops out multiple times per evening, or is unavailable for up to multiple days at a time) and often slow when I want to use it (because they can't afford a decent amount of upstream bandwidth due to their small size). It does make it easy for me to brush off Telstra sales people when they call up trying to get me to bundle my services, as I just tell them that Telstra failed for years to give me the ADSL that I repeatedly asked for and would have been prepared to bundle, and could they kindly tell their employer that they long ago destroyed any trust or goodwill I felt towards that company. As far as I can see, privatising Telstra had the grand result of transforming them from a "slow and unresponsive monopoly" to a brazen and cynical monopoly. Woohoo.

The actual right way to do this would have been to structurally separate the monopoly local loop from the rest of Telstra. But thanks to Howard/Costello's vision for telecommunications extending only so far as making their government look good by paying some debt off really quickly, we've got a monopoly carrier that for years treated crappy landline service as a way to push its customers towards higher-margin mobile products, a copper network showing the effects of a decade of desultory maintenance, and now a government that has committed to that same copper as our broadband future. A commitment they made without any credible evidence about how much is still fully functional, or about how much Telstra will want in return for access to that network now its value to the government has suddenly increased.

'Stralia: rhymes with failure.

You didn't get the MeMO? Asus Pad 7 Android tab is ... not bad

Jason Ozolins

Re: I would steer clear of anything that's not ARM

For a few years Android devs have had the option of coding performance-critical parts of their code to the bare metal instruction set with the Native Development Kit:

http://en.wikipedia.org/wiki/Android_software_development#Native_development_kit

In that case, a new tablet architecture might not ever get support for an NDK app that is no longer in active development.

Oh SNAP! Old-school '80s Unix hack to smack OSX, iOS, Red Hat?

Jason Ozolins
Devil

Re: You've made be rant now..

I think your early points are great, but you lost me starting at...

"Indeed, there are many who argue that kernels should not allow files to exist which start with a '-', or contain spaces, newlines, tabs, various binary characters etc..."

My view is that if I'm the sysadmin for a multiuser system, it's *my* prerogative to stop users creating silly filenames. It should *not* be a kernel default; but a filesystem mount option to reject open/creat/mknod/link/symlink/rename operations where the target filename contains characters from \001 to \037 would be entirely appropriate, and would save lots of user confusion when such problem files get created by accident. The restriction is also compatible with UTF-8 and EUC encodings.

...And if my users want to store data against arbitrary binary keys using 'special' C programs to make 'special' filenames, I'll tell them: Don't use a filename as the key, because it's a half-arsed hack. Instead, here you go, sqlite3 or gdbm or bdb, take your pick, they *do this stuff for you*. Oh, by the way, you can *even* use data containing '/' and ASCII NUL as a key. Whoa!!!!

The traditional "woo, anything goes except '/' and \0!" boast is making a virtue out of what likely started as laziness on the part of the kernel programmers. Laziness which probably made perfect sense for the times and for Bell Labs' use cases. These days, adding an extra "check character code is greater than 32" to the kernel path parsing is not such a burden. It will branch predict correctly almost all the time.
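As a user-space illustration of how cheap that test is (a hypothetical helper, not anything in the kernel):

    def sane_filename(name: bytes) -> bool:
        """Reject path components containing ASCII control characters \x01-\x1f.
        UTF-8 and EUC multi-byte sequences never use bytes in that range, so
        the check doesn't break either encoding."""
        return not any(0x01 <= b <= 0x1f for b in name)

    assert sane_filename("report-2014.txt".encode())
    assert not sane_filename(b"oops\nnewline-in-name")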

UNIX got some things really right, but some of what the early designers chose not to care about has turned out later to cause problems for scaling and security. What made sense for the use cases and developer resources of a CS research lab in the early '70s is not necessarily appropriate now. Robust filesystems with proper synchronous-write guarantees, race-free file syscalls and other niceties all came about because people recognised the need to take UNIX beyond what Ken and Dennis first envisaged. No slight to the inventors, just progress.

Jason Ozolins

Re: The paper is interesting, but wrong!

Does '-' collate before alphabetical chars in *all* Euro locales?

But yeah, if the '--rf' *did* sort to the end of the wildcard expansion, and POSIXLY_CORRECT was set, then getopt(3) would stop scanning for options once it hit the first non-option argument. So in this case the baddies are relying on GNU's super-helpful default getopt(3) behaviour to work their evil.
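The collation question is easy to poke at for whatever locales happen to be installed - the result depends entirely on the system's locale data, so this is a probe rather than an answer:

    import locale

    names = ["--rf", "Afile.txt", "zebra.txt"]
    for loc in ("C", "en_US.UTF-8", "de_DE.UTF-8"):
        try:
            locale.setlocale(locale.LC_COLLATE, loc)
        except locale.Error:
            continue                       # locale not installed on this box
        print(loc, sorted(names, key=locale.strxfrm))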

Fibre to YOUR premises NBN still on table pending telco talks

Jason Ozolins

Sigh... these points would indeed be damning criticisms if there was any good evidence they actually wanted the NBN to succeed. Their attitudes to it progressed more or less from outright mockery to witless objections to grudging skepticism to bold claims they could do just as well yet quicker and cheaper through the cunning strategy of moping wistfully at Telstra until it cut them a sweet deal on some suddenly valuable infrastructure.

They don't want it to make money. They want voters to get bored and disillusioned from the wait, and then let it die.

Unisys cozies closer to Intel, 'sunsets' proprietary processor

Jason Ozolins

Re: Emulation?

Sperry Rand was already running up against this in the 1970s, when they wanted to use commodity 2's complement ALU chips to build their crusty 1's complement CPU:

http://www.google.com.au/patents/US4099248

Unisys could just have called their x86 processors "high speed vertical microcode engines"... a long time back on comp.arch, a poster was lamenting how new [RISC? Memory fails me...] CPUs never featured Writable Control Store, to which an old hand replied "They do. It's called L1 cache."

Should NBN Co squeeze a server into FTTN nodes?

Jason Ozolins

Analogy Ambiguity

" That's a bit scary because we've been told FTTN deployments will be going like a train any month now."

If we lived in a country where trains were associated with speed and reliability, it would be a bit scary for the right reasons. Instead, it's a lot scary for all the wrong reasons.

600 school sysadmins sacked in New South Wales

Jason Ozolins

The chaplains can spend their time making the teaching staff feel less depressed by the stress of their second job as unpaid, undertrained sysadmins.

NBN Co in 'broadband kit we tested worked' STUNNER

Jason Ozolins
Meh

100Mb/sec at 100 metres - vectored???

Ever since the LNP decided not to kill the NBN in favour of somehow spending the money they wouldn't save on flood relief, infrastructure or bribing the very rich into bearing children, they've been really hot on the idea of vectored VDSL replacing FTTP.

Turnbull's big claim was that vectored VDSL would allow punters to get up to 100Mb/sec out of their waterlogged copper pairs, so they'll be really stoked to see NBNCo getting these numbers out in front of voters. But vectoring is done on the node end to reduce the effects of crosstalk, and it's not hard to see how a single active VDSL pair is not going to see a lot of crosstalk from VDSL traffic on adjacent pairs.

It would be a lot more reassuring to see a successful demonstration of multiple active pairs running closer to the maximum specified distance between a node and end user premises.

IBM was wrong to force UK workers off final salary pensions – judge

Jason Ozolins
Devil

Re: Now how many more years...

Just read that Australia has 35% of its over 65s living in poverty. Housing is extremely expensive, and population pressure means that it's not a property bubble so much as an endless shortage. Unless you own your dwelling by the time you retire, you will be hard pushed to pay the rents that are being asked. The new conservative Oz govt also just foreshadowed pension age rising to 70. They should snap up some of the urban brownfields sites as manufacturing industry leaves due to the high cost base and exchange rate. That way they'd have some appropriate land to build new workhouses.

Personally, unless I am lucky enough to have a fantastic job by age 60, I will do everything in my power not to be compelled to slog away for the last years of my life where I have a decent chance of good health and mental capability. I don't need to go on endless overseas trips like some Australian baby boomers seem obsessed with, I just want to spend a few years of decent physical and mental health actually living my own life on my own terms for a change, and I am prepared to live in a very simple way to achieve that. Swapping those ten good years for ten more crappy years of feeble ill health at the end of my life is a worthless bargain. If I later start to run out of money or health or brain, I'll already have the requisite resources for the next step... from Exit International.

Audio fans, prepare yourself for the Second Coming ... of Blu-ray

Jason Ozolins
Coat

Re: Not Engineering

That article did point out, though, that the NS10's port-less design made for a tighter transient response than other similarly positioned, better-spec'd monitors. That it so often gets described as "brutally unflattering" or similar also suggests that it was considerably more clinical than an average home speaker.

Coat icon is for after I admit that I've only got some old Behringers... I'm not exactly high up the audio food chain.

No, we're not in an IT 'stockapoclyse' – boom (and bust) is exactly what tech world needs

Jason Ozolins
Devil

Efficient Markets Hypothesis? It's not a hypothesis unless...

...it's falsifiable. As in, it is possible to describe a state of events that, should they be seen to actually occur, would falsify the hypothesis. It's a bit of rigor that adherents to the Dismal Science ought to embrace.

Tested (and as-yet unfalsified) hypotheses eventually get accepted as theories. But the way that free marketeers toss the EMH around, you'd think it ought to be called the Efficient Markets Axiom.

Anatomy of OpenSSL's Heartbleed: Just four bytes trigger horror bug

Jason Ozolins
Facepalm

Workaround: Clients could refuse to connect to vulnerable websites

Surely if the client SSL library was altered to try this exploit once during certificate exchange, the client could drop the connection if anything extra was returned in the heartbeat response. It's a heuristic thing - the larger the "exploit" request size, the easier it would be for the client to tell that the server was unpatched - but it is at least *something* that could be done at the client end to catch connections to insecure servers.

Major tech execs fling cash at heretical AI company Vicarious

Jason Ozolins

"Recursive Cortical Networks"

...is the model that Vicarious have publicly claimed to be working on. I wonder whether it's got any relationship to Confabulation Theory?

http://www.amazon.com/Confabulation-Theory-The-Mechanism-Thought/dp/3540496033

Confabulation theory was the first account I came across which seemed plausible both from an evolutionary viewpoint - it posits that the same basic architectures that evolved for coordinating muscle movement were then specialized in evolving higher cognitive functions - and in light of the observation that comparatively few layers [think "gate delays" as an EE analogy] of processing can take place between our sensory input and our reactions if we are to react in a usefully short time.

In any case, with 5 out of 8 of their team members

http://vicarious.com/team.html

sporting PhDs, and the other three also sounding like overachieving types, it looks like it would be a very interesting place to work.

Intel promises 10Gb Ethernet with Thunderbolt 2.0

Jason Ozolins

Re: Not very impressive...

Just as politics is the art of the possible, business is the art of the profitable. Thunderbolt gear is already more expensive than USB 3.0, even with "shonky" copper. Re-using DisplayPort electronics to drive short length copper cables reduced the cost to vaguely acceptable levels, and also allowed backward compatible re-use of the mini-DisplayPort connector on laptops with very little space for connectors.

There are cases when copper just *wins* from a cost point of view - my workplace has 3500 x86 servers with short passive copper QSFP+ 56Gb/s cables going up to their top-of-rack Infiniband switches. From those switches to the core switches the cables are optical, with active QSFP+ end connectors, and those cables are very expensive. Similarly, racks of servers on 10Gb Ethernet often use passive copper SFP+ cables to get to top of rack switches, with multimode fibre SFP+s back to core switches, if there is no need for full bandwidth from the whole rack back to the core switch.

Anyway, DisplayPort was already quite a cool interface - I've upgraded to a new work Mac and am using my old iMac 27" as secondary monitor; it's a great way to get more life from a nice screen.

Neil Young touts MP3 player that's no Piece of Crap

Jason Ozolins

I cut my hair with a set of Wahl clippers and combs, and have not paid for a haircut since about 1997. It doesn't take long before you can do the back of your head without having to hold a hand mirror...

Jason Ozolins

Re: PoC

A few years ago, I remember being quite surprised that the highest bit rate MP3 encoder supplied in iTunes made such a hash of a big booming reverb effect that I could clearly hear the difference from the original, despite my hearing already starting to go to crap. I was pretty familiar with the original track. [Movement in Still Life, UK version - BT is pretty obsessive about his recording technique, FWIW]

So, yeah, MP3 - depends on the developer's commitment to the format. And the program content - distortion-laden guitars (Neil Young, perchance?) are actually really challenging to compress well with perceptual coding, because there's energy *all over the spectrum*, not in neat peaks like for many acoustic instruments. Not sure if your "source is of good quality" proviso was meant to apply in that case... it certainly makes the snobby "give classical stations higher bit rates because golden ears" decisions for BBC DAB radio seem even sillier.

http://en.wikipedia.org/wiki/BBC_National_DAB

ARM lays down law to end Wild West of chip design: New standard for server SoCs touted

Jason Ozolins

Re: Color me unconvinced

The number of ARM processors shipped vastly outstrips the total number of x86 processors shipped in the same time. I guess that wasn't one of the many broken promises. It would help if you gave some detail on who promised what.

A RISC versus CISC debate, absent any engineering or business considerations, is about as deep and thrilling a dispute as hatchbacks versus sedans, without reference to any real cars. Most of the interesting differences are between particular models (ISAs), not the abstract classes of car (architectural style).

It happens that the Pentium Pro and its many evolutionary descendants decode the more complicated x86 opcodes into RISC-ish uops internally. Seems to work okay for Intel.

The year when Google made TAPE cool again...

Jason Ozolins

I hadn't thought about DAT for a *long* time, but a quick check of the price for a 160GB native capacity "DAT-320" tape (even the product branding assumes all your data is 2:1 compressible) is somewhere around AUD$35, whereas a 1.5TB native capacity LTO-5 tape is around AUD$60. Looks like I wasn't missing much.

With those sorts of numbers, and an LTO-5 drive costing roughly AUD$2K to a DAT-320 drive at roughly AUD$1K, it's hard to see how DAT could compete against high-capacity tape for a bit more initial outlay, or a removable disk system for even less initial outlay... [but yes, there are some durability issues in that case.]

Jason Ozolins

Re: tape is cheap, portable, and fast transfer rate

It depends what your disaster recovery scenarios are. If you are storing data offsite because you need business continuity through a disaster that takes out your online data and primary backup, then you have to have a firm plan for how to get the data back intact from offsite tape within your DR window, onto enough surviving hardware to continue operations. If you are mostly storing a copy offsite to guarantee the survival of the archive itself - say, if the primary copy is physically far enough from your online data that it could be lost in a disaster without losing the actual online copy and servers - then you just need to plan how you'll re-replicate the archive at acceptable risk.

My workplace (nci.org.au) is progressively deploying petabytes of online research data storage. At that scale, tape backup has big power/cost/durability advantages, and so that's our chosen medium. But as our primary tape system has to live in our main data centre with the online storage, almost any DR scenario that requires our offsite data copy will necessarily involve significant lead time to buy in more disk and other hardware to replace the failed/destroyed equipment; there is not the budget for continuous availability of this kind of data at this scale through disaster scenarios. Even so, we are working on strategies for minimizing the time to restore from tape, with particular regard to the very wide range of file sizes within our online datasets. Restoring tiny files from tape at media rates requires a lot of metadata IOPS, and we are taking into account the access patterns typical for each dataset before deciding how it should be packaged for long-term storage and backup.

Junior telcos tie knot in NBN Co copper plan

Jason Ozolins

The market structure is exactly the same except that with FTTN, it is at the moment unclear under what arrangements the last mile medium will be tested, remediated and maintained, and how much of the market will actually be reliably served by FTTN. So it isn't really the same at all.

I'm not bagging FTTB: it sounds like a credible "least worst" option for MDUs and there is always scope for building copper to be renewed; but betting the farm on how well Telstra has maintained its last mile copper over the last ten years is "a brave move, Minister."

Bigger on the inside: WD’s Tardis-like Black² Dual Drive laptop disk

Jason Ozolins
Linux

Re: Linux support... well, who can say?

It couldn't be a SAS expander - you'd need a SAS controller in your laptop to make use of that. It could well be just a SATA port multiplier chip, as Marvell does make those:

http://www.marvell.com/storage/system-solutions/sata-port-multiplier.jsp

If so, it looks like it comes up in a transparent passthrough mode before the extra driver magic is added to the host. There doesn't look to be a publicly available detailed datasheet for their port multipliers, but it would be interesting to see if a decent SATA controller under a recent Linux kernel would detect the chip.

Tales from an expert witness: Lasers, guns and singing Santas

Jason Ozolins
Stop

Re: Couple of points

Guess what... it's a bit of both. I have Asperger's, so coordination and social skills weren't my strength. But I could sprint and do long jump, and I rode my bike 9km every school day for most of high school, so I was reasonably fit.

But it was hardly "character building" to have some smug arse of a PE teacher making suggestions about my sexuality when I couldn't cope with the mixed ballroom dancing unit we had to do each winter. Physical contact made me really stressed, particularly when some of the girls we had to partner were actively participating in the bullying that was making it hard for me to do well in the subjects I actually cared about. It was actually the wife of the head P.E. teacher, also a P.E. teacher, who took a moment to find out what was going on with me, and arranged for me to go to the library and do something useful, instead of standing outside the hall each lesson to satisfy the vindictive streak of my teacher. Thanks, Mrs Moore... and yes, I somehow managed to get married to a woman and have a family, despite my flamingly effeminate stance against ballroom dancing.

So yeah, I learned some things about character in P.E., but it was mainly about what sort of people I could trust in any way: not the ones who enjoyed causing suffering. Co-ordination is certainly important to your development as a well-rounded person... so just spend some time playing handball, tennis, frisbee and juggling to get over your "motor moronhood"; it's a lot more rewarding than being punched in the nuts in a rugby scrum by people who hate you.

Eat our dust, spinning rust: In 5 years, it'll be all flash all the time

Jason Ozolins

Re: The disks may go, but the blocks will remain

Blocks are indeed a convenient abstraction, but inside some SSDs they're already getting de-duped and compressed, so there are still possibilities for shifting the division of responsibility between the filesystem and the hardware. TRIM, wear levelling, read disturbance tracking, and a raft of alignment hacks to deal with FAT32/MBR legacy brain damage (up until 4K-sector HDDs arrived and bit MS, and they finally abandoned the stupid lie that every disk has 255 heads and 63 sectors and that partitions simply must start on a cylinder boundary) - all of these are symptoms of the mismatch between a block storage model that tries to cope with any write pattern to arbitrary 512-byte blocks, and the physical realities of easy-to-kill larger programming pages arranged within erase blocks.

Some of this stuff can be handled just by exposing some basic geometry - what alignments and write sizes make sense for the underlying flash, for instance - but a copy-on-write filesystem in the vein of ZFS or Btrfs, aimed more specifically at flash and in control of programming/erase policy, could bypass the standard block model altogether. For instance, the defragmentation preening that frees up contiguous space on HDDs could turn into a way of freeing erase blocks, and wear levelling falls out as a consequence of the copy-on-write nature of the filesystem.

A machine I worked on, http://en.wikipedia.org/wiki/Vayu_(computer_cluster), had 1500 blade servers each with 24GB of SLC flash SSD, as developed by Sun. The SSD write bandwidths would drop considerably over time, even with aligned 4KB write workloads for scratch storage and swap; there was no TRIM or secure erase support on these SSDs, but we worked out that every month or so we could do a whole-of-SSD blat with large, aligned writes to return each SSD to near its original write speed. Granted, this speaks to the maturity of the SSD firmware that was delivered in 2009 with this machine, but it seems to me that better documentation of the SSD and a better understanding of how the filesystems were hitting that block device could have helped us avoid that performance degradation in the first place.
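The "blat" itself was nothing more exotic than sequential, large, aligned writes across the whole device - roughly the following sketch, though our actual tooling differed and the device name here is hypothetical (this overwrites everything on the device):

    import os, mmap

    DEV = "/dev/sdX"                 # hypothetical device node; every byte on it gets overwritten
    CHUNK = 8 * 1024 * 1024          # large writes, a multiple of any sane erase block size

    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    size = os.lseek(fd, 0, os.SEEK_END)      # block devices report their size via seek
    os.lseek(fd, 0, os.SEEK_SET)
    buf = mmap.mmap(-1, CHUNK)               # anonymous mmap: zero-filled and page-aligned, as O_DIRECT wants
    for _ in range(size // CHUNK):
        os.write(fd, buf)
    os.close(fd)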

So, yeah, the block abstraction is a useful one, but it's not without its warts.

Jason Ozolins
Meh

Speed (bandwidth)? Or acceleration (latency)?

Guess Amazon Glacier has no reason for existing then. After all, who would ever want to wait more than a few seconds to get their data back, even if it then arrives at a decent *speed*.

If your Internet commerce business model really does involve never knowing what (large) pieces of data your clients will instantly need from anywhere in your single-tier all-flash storage setup, I hope that they're paying well for the service...

Turnbull touts construction resumption in YouTube vid

Jason Ozolins
Meh

So the agreement with Telstra is coming when, exactly?

Whatever work is going on towards FTTN is presumably within the limits of the existing agreements between NBNCo and Telstra, which is to say using their ducts to run fibre (and now power, yes?) to cabinet sites, once those cabinet sites are chosen. Which is fine, until it comes time to trace, test and possibly remediate that last leg copper that David Thodey reckons is good for another hundred years... who knows, maybe he was thinking it could last another hundred years just satisfying the Universal Service Obligation. Dial-up modem, anyone?

The Coalition having waxed so very insistently on how FTTN was the only sensible approach for Australia (that is, after four years of variously insisting that it was unnecessary, that 4G wireless would make the NBN obsolete, and that the money could be better spent on flood relief or proper manly infrastructure like roads), what are the chances that Telstra will play hardball and shift a large part of the unknown cost of copper tracing, testing and remediation back onto NBNCo, while retaining actual ownership of the last leg copper? Pretty good, I'd say.

FTTN might as well stand for "Feed Telstra The NBN".

FACE IT: attempts to get Oz kids into IT jobs are FAILING

Jason Ozolins
Devil

Fancy stuff is pointless without the basics

The last factoid in the article is the real elephant in the room: basic skills are just not getting the priority they deserve in early education. I'm pretty worried about what my kids bring home from school. Last year, my 6yo was bringing home corrections to her spelling which were not actually correct themselves. "Trisicol" for tricycle, "verander" for verandah - from a teacher who claimed righteously to my wife that teaching was not just a job to her, but a vocation. This year, in year 2, my daughter is doing homework at an age where I and all my classmates had none, and yet I am just not convinced that the general standard of attainment in her class is any better than I saw at that stage of primary school...

Frankly, if you go into IT without a decent command of language, and of discrete maths, you will struggle to collaborate with people effectively, and to bring any rigor to problem solving processes. I'm not talking fancy University maths, just a decent secondary schooling foundation to get you used to a bit of abstract and systematic thought that you can build on as required.

Going on about some magic IT training that will turn kids into the IT workforce of the future is missing the point if the fundamentals are not properly addressed. It also helps to be interested in the subject, because as cracked points out, you will have to keep learning new stuff to stay useful...

Helium-filled disks lift off: You can't keep these 6TB beasts down

Jason Ozolins

Re: less helium than a balloon

Funny you should say "laws of economics" rather than "economic forces". Because, with the way that the various world economies have been going, it doesn't seem like we actually know the laws well enough yet - unless they are the sort of unfalsifiable laws where whatever happens, that's just what the laws said would happen. Funnier yet, there are psychological studies where students of economics prove to be less altruistic/fairness-minded, and more self-interested, than "ordinary" students in financial dealings... but supposedly the same laws of economics apply to both economists and lesser mortals.

Sure, profit motive will draw private companies to fill the gap after some amount of price flapping and pain among industrial and scientific users of helium. But was it really necessary for the US Government to get out of the helium marketplace in some ideological panic, lurching around smashing stuff on the way out? Oh well, after the US Government shutdown last month, that would seem to be totally par for the course.

Jason Ozolins

Maximum operating altitude?

Seagate states that most of their drives are designed for a maximum operating altitude of 10,000 feet. If the seals on these helium-filled drives hold at altitudes higher than 10,000 feet, these drives could operate in places where most Seagate drives are not warranted to work. Good for folks in Bolivia, for instance...

Malcolm Turnbull throws a bone to FTTP boosters

Jason Ozolins

Re: The devil is in the detail

Yes - nowhere have I seen any mention that the Coalition sought, or had, access to Telstra cable records that would let them make more than a wild guess as to how much copper would (in principle, at least) support VDSL.

Add to this the elephant in the room that is Telstra's level of commitment to proper maintenance of their copper network in the last decade. My mother's last house, built in the late '80s, had decent ADSL2 a couple of years ago, until a fault brought out a Telstra tech who, between complaints about how crappy his job had become, mentioned to my brother, "yeah, you probably won't have such good ADSL anymore". Whatever work he did on that line to get it working again, he wasn't lying.

The opinion of Telstra executives at a Senate hearing in 2003 was that their copper network was "five minutes to midnight", and they would only guarantee its function to 2018. An optimist might say that those executives lacked the vision to see that networking technology would eventually find ways to wring decent speeds out of a few hundred meters of copper; but the real question is whether that view of the copper network, and their focus on higher margin mobile services to drive revenue growth, led to such cost pressures that they effectively gave up on maintaining the copper to the standard where VDSL2 speeds were uniformly achievable.

For what it's worth, I see FTTB for multiple dwelling units as one place where it makes sense to use VDSL2. The copper runs are shorter, and hopefully the deployment could be done in such a way as to leave open the possibility of fibre retrofits back to the basement, for tenants who manage to get the body corporate to agree and are prepared to pay for the retrofit.

It's all in the fabric for the data centre network

Jason Ozolins
Meh

Infiniband, anyone?

Funny how Infiniband has been offering a switched fabric network with separated control and data planes for about a decade, at a price per port that for a long time was way lower than comparable Ethernet (once the Ethernet specs were even drawn up for the link speeds that IB was supporting). Plenty of supercomputers have had single IB connections from compute nodes to a converged data/storage fabric.

Not sure how much 40Gb Ethernet switches are going for, but considering that a basic unmanaged 8 port QDR (== 40Gb/sec signalling, 32Gb/sec data) switch can be had in the USA for less than $250/port, and a 36 port top-of-rack QDR switch with redundant power for about $140/port, I'd be surprised if there were such low entry points for Ethernet switches with comparable bandwidths and software defined networking capability. Even that tiny 8-port QDR switch can be connected into a mesh fabric, and toroidal IB networks with peer-to-peer links to adjacent and nearby racks can allow some degree of horizontal per-rack scaling for deployments growing from small beginnings that cannot justify more expensive core switching.

Granted, this last point is making a virtue of necessity, in that you pretty much *need* to run an Infiniband subnet manager on an external host once you get to a decent size network. The subnet manager that was supplied on an embedded management host with our modular Voltaire DDR IB switch was not much use, as it tended to lock up... it's easier to restart the subnet manager, or switch to a failover backup, if it's running on hosts you fully control. =:^/

Windows 8.1: Read this BEFORE updating - especially you, IT admins

Jason Ozolins

Re: Usual MS upgrade stuff then...

"Thirdly, wireless broadband is the future and on the basis of downloads of up to 40mps in parts of Australia is very much the present in part. Mind you 25mps is what the previous Government's broadband was slated to be in its first 3 years of full operation."

Whatever revelatory substance it is that they put in Alan Jones' coffee at the 2GB studios, apparently it is being served at your local cafe too.

If you take a second to look at your preferred NBN implementer's choice of technology, as released in the Coalition's NBN policy in April 2013, you will find that there is absolutely no mention of magic wireless that will replace wired deployments in metro areas. Ze-ro mention of magical unicorn+rainbow radio technology to serve city users, just 4G/LTE for rural areas, with lower contention ratios than are designed for city 4G deployments. That is because 4G/LTE, like all the wireless broadband technologies that came before, is subject to the laws of physics, which kind of tie you down to using a crapton of radio spectrum if you want to serve a lot of concurrent users in a given area.

This is why mobile telcos are actually clamping down hard on download limits. Telstra's rate card: http://www.telstra.com.au/broadband/mobile-broadband/plans/ - shows a breathtaking $95/mo for 15GB. But everyone knows *they're* a rip-off, [that was guaranteed by the monopoly status they inherited when the Coalition privatised them, cough]... so surely overseas it's all roses and endless video streaming over 4G? Okay, here's what Singtel has to offer for its 4G mobile PC-oriented broadband plans:

http://info.singtel.com/business/products-and-services/internet/broadband-laptops-and-tablets

Mmmm, AUD$34 for 10GB of download, with excess data at ~ AUD$9/GB. That's the future, right there! (Assuming you meant the future to be just expensive, instead of very expensive.)

4G will work really well for the things businesspeople want to do when they're on the road. It will not be a magical replacement for wired broadband in metro areas. Nor will whatever follows it. Wired deployments have their own contention issues, but they actually exist in the other direction - there is much more total potential downstream bandwidth than you can afford to carry/switch upstream. But if the business case emerges, you can upgrade the backhaul or switching gear on your wired deployment after the fact; whereas 4G radio technology will stay pretty much set in stone - for a certain amount of spectrum, you'll get a fixed Gbit/sec of total usable bandwidth.

Meanwhile, over in the 2GB part of the collective delusion/bile tank that is Australian commercial talkback radio, Alan Jones will politely refrain from calling his Coalition pals idiots for not heeding the same sage advice about magic radios being the future of broadband that the Labor hacks were so stupid to ignore. Funny about that.

Object storage: The blob creeping from niche to mainstream

Jason Ozolins
Meh

Re: Object: It's not just about storing stuff...

Yes and no to the "still need a filesystem under the covers". You can pretty much chop the directory tree off a traditional UNIX filesystem and use inode numbers as object IDs. For instance, the Object Storage Targets inside Lustre can be mounted locally on the storage servers for maintenance when the cluster filesystem is down, and what you then see is a placeholder filename for each inode in use, hashed into a containing directory tree to keep the directory sizes manageable. When the cluster filesystem is mounted, most operations refer directly to the inodes - it's only when a new object is created that its placeholder filename has to be added too.

As for efficiency: if you expose inode numbers to the filesystem layer, bypassing the directory tree and addressing inodes directly is certainly no *slower*...
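A sketch of the flavour of that hashing - illustrative only, the real OST layout differs in detail:

    def placeholder_path(object_id: int, buckets: int = 32) -> str:
        """Spread placeholder names for object IDs across a fixed number of
        subdirectories, so no single directory grows unmanageably large."""
        return f"O/0/d{object_id % buckets}/{object_id}"

    print(placeholder_path(123456789))    # -> O/0/d21/123456789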

Microsoft Xbox One to be powered by ginormous system-on-chip

Jason Ozolins
Windows

Memory-mapped frame buffers, old as the hills

Video access to large address ranges of main memory has been around since long before the Amiga. For instance, the Atari 800 and the Commodore 64 - both those had memory-mapped frame buffers which could be set to read from most parts of the RAM.

The Atari 800 custom audio/video chips were IIRC designed by Jay Miner, who went on to design the custom chips in the Amiga. The Amiga had much more CPU memory also addressable by graphics hardware, and added a nifty DMA coprocessor that could do bit-oriented graphics operations over data stored in the 'chip' memory, as well as moving data around to feed the PCM audio channels and floppy controller... but at the core, it was the same kind of architecture, just scaled up.

Things got much more interesting when CPUs got write-back caches; now explicit measures were required to ensure that data written by the CPU was actually in memory instead of just sitting in a dirty cache line at the time the GPU or other bus mastering peripheral went to fetch it. It's all the same cache coherency issues that multiprocessor system architects have been dealing with for years, and in a system like the XBOne, most of the peripherals are more or less peers with the various system CPUs in terms of how they access cached data; in fact, most peripherals look like specialised CPUs, hence the "heterogeneous" part of the HSA. You don't need to explicitly flush CPU caches, or set up areas of memory that aren't write-back cached, in order for the GPU to successfully read data that the CPU just wrote, or vice versa. That's the nifty part.

I'm guessing that the XBOne, like the Xbox 360, will have its frame buffers and Z-buffers integrated on the enormous CPU/GPU chip. That will reduce the bandwidth requirements on main memory by a great deal, as GPU rendering and video output will be served by the on-chip RAM. There are other ways to get some of the same effects - the PowerVR mobile device GPUs render the whole scene one small region ('tile') at a time, only keeping a couple of tiles plus the same size of Z-buffer in on-chip RAM, then squirting the finished tile out to main memory in a very efficient way - but it does create other limitations in how the graphics drivers process a 3D scene; any extra CPU work to feed the GPU takes away from power savings given by the simpler, smaller GPU. Tradeoffs abound.

Yahoo! Japan drops UPS systems, crams batteries into servers

Jason Ozolins

I'd guess that they handle this in other ways - make another copy of the data on the server, and/or take it out of the load balancing pool before swapping the battery. Paying for redundant hardware on every node to reduce a rare failure mode is the kind of thing the huge scale companies are trying to avoid where possible.

Jason Ozolins

Re: Still need generators; PUE tricks

UPSs + backup generators + dual power feed/supply for everything is the standard approach if you want to have a battleship-style datacentre that can fight on through disasters - particularly if you are selling space to tenants and have minimal control over their behaviour/system architecture. You sell them a service level, and then you have to maintain it. Their resilience to disasters outside your usual security+power+environment+network obligations is not your problem.

On the other hand, if you are designing a scale-out system that will live across datacentres that you happen to control, you can make all the ducks line up in a different way:

- use multiple sites with diverse power and network feeds

- plan only to ride out short outages at any given site

- have non-redundant power into each rack, and into each server, but diversity in feeds to different racks

- integrate power/network topology + physical placement information into data placement/load balancing algorithms to maintain data redundancy and service availability in the face of failures.
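A minimal sketch of that last point - choosing replica locations so that no two copies share a rack (and hence a power feed); a real placement algorithm would also weigh free space, network distance and load:

    import random

    # node -> rack; each rack is a failure domain because it has one power feed
    topology = {
        "node01": "rackA", "node02": "rackA",
        "node03": "rackB", "node04": "rackB",
        "node05": "rackC", "node06": "rackC",
    }

    def place_replicas(topology, copies=3):
        """Pick one node from each of `copies` distinct racks."""
        by_rack = {}
        for node, rack in topology.items():
            by_rack.setdefault(rack, []).append(node)
        racks = random.sample(sorted(by_rack), copies)
        return [random.choice(by_rack[rack]) for rack in racks]

    print(place_replicas(topology))       # e.g. ['node01', 'node04', 'node06']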

Vertically integrating the hardware, software and hosting of your service means you don't have to pay for double the UPS/generator/power distribution/PSU capacity to achieve service-level redundancy. In this model, most of your servers need maybe a couple of minutes of battery ride-through to cover small power blips and also let them write out dirty data from RAM. If you treat RAM as nonvolatile, and handle redundant storage at a higher level in your stack, you can use free RAM as write-behind cache and also remove the need for a lot of synchronous filesystem writes, so you get better throughput for write-heavy workloads.

As for the objection that putting the UPS inside the server just hides the effective PUE of the UPS, consider that instead of building a big, easily serviceable AC->DC->AC UPS that will keep the fussiest of servers running, you get to look at the PSU schematics and build the simplest AC->DC UPS that will suffice to keep that specific PSU's outputs within spec. That's got to help a bit.
