Can you afford the leccy? some of that kit is not exactly known for sipping power you know.
Where's the leccy going to come from if any more power stations get switched off?
Don't talk about wind power. Remember that in winter we sometimes sit under a high-pressure system for several days (if not longer), with cold winds coming straight from Siberia to freeze our naughty bits off.
Otherwise, a nice thought but be careful with any USB devices they might have cloned chips and subsequently get bricked. What price your data then?
10Gb for less than $Millions?
Where are you seeing those 10Gb switches (and what are they?) I've not seen any 10Gb kit for less than 80% of current retail except for the odd PCI-e card here and there.
Re: 10Gb for less than $Millions?
There's a dual-port 10GbE card on eBay for £60 right now, but all the rest are in the £90-110 range.
A closer look at the sixty quid one shows there's another £50 of customs and P&P charges on top, though.
Last year I got a WatchGuard XTM8 (810) 10-port 1GbE firewall/VPN appliance for £50 off eBay (normally £12,000), but with no PSU and no guarantee it worked.
Replacement PSU £40.
pfSense (free) installed with help from some of the forum members - it's a bit finicky because the board has no COM1, only COM2.
Works like a dream: Intel quad-core CPU and 2GB RAM as standard.
I've still yet to figure out why I need the ability for 500k connections to my home broadband, but at least I'm covered ;)
Interesting, strokes beard
Perhaps an exercise in three categories.
Moderate enterprise: say a small data centre with some DR capacity for compute - 2 x 4TB relational database back ends, a domain controller, file and print servers on 10GbE with backup of some kind, 10GbE core switches, dual gigabit external gateways.
SMB: 1TB database server, 5TB storage on a gigabit network, and a backup strategy.
Single business person: 2TB file store, decent compute (minimum four cores, say), a print server, and backup.
I know that just by being in the right place at the right time some real bargains and good free stuff can be had, if one has somewhere to store the hardware until it's required. Bring on the war stories. What could Trev come up with?
Everything comes at a price, and the phrase "pay in peanuts, get monkeys" springs to mind.
Yes, you can get a cheap 10G switch - but what's its buffering like? Feature support? Supportability in production when things go down?
Honestly, it's not helpful to say "a switch for X currency" - we should be talking about price/port and watts/port. Both will give you a better measure of what you're actually looking at. Also consider whether the port is routed or switched (a routed port will probably cost 20x more than a switched one).
I guess it comes down to how often you expect the network to go down, or performance expectations (and whether you're running big data - if you are, good luck with cheap switches). These days, I'm finding that the expectation for uptime on the network layer is 100%, with a corresponding 0% packet loss.
Beer, because that's what cheap switches drive me to.
ummm Ruairi, they're not cheap switches, they're really friggin' expensive switches that are a year or ten old (see the poster above who saved nigh on £11k on his overpowered home mega-router)
best read the article again bro
Aside from junk sourcing, consumer tech gets better and better. I use consumer tech for small biz linux servers and the stability is pretty impressive. Downsides like single power supplies and no iLo but in the (very) small biz world they barely have a UPS or rack let alone two independent sources of leccy. I do miss iLo though, waiting for that to move into a standard software stack instead of hardware.
It's getting exciting with current consumer tech levels too: this year I'm offering 1,000MB/s storage read/write on a server the size of a shoebox consuming 40W, for less than $2k. I'd need +$10k or so in enterprise spending to achieve that.
The secondhand market is now flooded with heaps of sexy kit that corps don't want. Not like enterprise kit breaks easily anyway, lasts for ages unless you're talking about a disk.
junk sourcing ftw!
I read the article again, and I still see my points as valid.
Quick maths to show:
40 x 10G switch @ 1,500 GBP -> 37.50 GBP/port.
I'm currently looking at about 100 GBP per 10G port. Granted it's about 2.5x the price, but it's a) new, b) under warranty, c) having features actively developed, d) not going to die of old age/degrading solder/degrading components, and e) has support costs included.
Also, my power draw is about 5W/port. I can't say exactly what the older kit draws - but I'd wager it's quite a bit higher, which will end up costing more in the long run in a DC (where power IS at a premium).
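A sketch of that per-unit breakdown - note that the used switch's total wattage and the five-year electricity tariff below are illustrative assumptions, not figures from the thread:

```python
# Compare a secondhand 40-port 10G switch (1,500 GBP) with new kit at
# ~100 GBP/port on price/port, watts/port and rough 5-year power cost.
# The 400 W draw for the old switch and the 0.15 GBP/kWh tariff are
# illustrative assumptions.

def per_port(price_gbp: float, ports: int, watts_total: float):
    """Break a switch down into GBP/port and W/port."""
    return price_gbp / ports, watts_total / ports

used_price, used_watts = per_port(1_500, 40, 400)   # 37.5 GBP, 10 W
new_price, new_watts = 100.0, 5.0                   # figures from the comment

kwh_gbp = 0.15
hours = 5 * 365 * 24
used_total = used_price + used_watts / 1000 * hours * kwh_gbp
new_total = new_price + new_watts / 1000 * hours * kwh_gbp
print(f"used: {used_total:.0f} GBP/port, new: {new_total:.0f} GBP/port")
# -> used: 103 GBP/port, new: 133 GBP/port
```

Even at double the per-port draw, the used switch still edges ahead on five-year cost in this sketch - so the case for new kit really rests on the buffering, features and support arguments rather than raw price.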
Lastly, my docs are current, and the engineer hours spent on each device are minimal (I've got many devices in my network I've never had to log in to - auto-provisioned, monitoring in place, working flawlessly - the kind of features you don't find on older kit).
So I'd respectfully ask you to re-read my original comment and try to follow my logic of breaking pricing down per unit.
For sure, I'll give you the server/computer-related stuff - it totally makes sense to go down that route. I'm just not convinced on the networking side.
ok, sure, for 10gbps ports in high capacity environments you probably need leading edge shiny but that's a pretty damn specific use compared to what (I think) the article is getting at
10gbps retail pricing for the rest of us in normal IT land is pretty prohibitive and the storage speed has been demanding something faster than gigabit since SSDs hit the scene. I imagine gig port switches with 10gbps uplinks (to a fast server or two) are starting to become popular on the secondhand scene. You might be surprised how much <12 months old kit is around on these secondhand/ex-lease stores if you haven't looked already.
There's going to be a huge disparity between networking performance and storage performance even down in consumer land over the next 18 months, as SATA Express (and whatever that other PCIe storage standard is called) becomes commonplace. I see that pushing more and more people into the ex-lease purchasing market, whether it's a better idea or not - it's not like most people's IT budgets are increasing to let them get away with retail purchases when something secondhand will suffice.
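The gigabit bottleneck is easy to quantify - the SSD throughput figure below is a typical assumed value, not one from the thread:

```python
# Why SSDs make gigabit feel slow: line rate vs. sequential throughput.
gigabit_mbytes = 1_000 / 8   # 1Gbps is ~125 MB/s before protocol overhead
sata_ssd_mbytes = 550        # typical SATA III SSD sequential read (assumed)
print(sata_ssd_mbytes / gigabit_mbytes)  # -> 4.4: one SSD fills 4+ gig links
```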
Well, storage is actually gonna be a bit of a pain on cheaper 10G switches... and NICs (actually - side note - our 10G NICs are probably 3x to 4x the cost of the ports they connect to).
If you're using something older that's got poor buffers (or a poor buffering architecture!), media changes and store-and-forward, you're gonna have a bad time with drops in your network, specifically around SAN and storage (since they can pump out bits quite fast for the cost).
In general, when the network tends towards meltdown the effects are more widespread than a single server going down, which is why I'm always careful in purchasing. An underperforming network is also SUPER costly to re-provision (as is the staff to make it happen).
These are the reasons that I'm quite hesitant to go down the route of junkboxing. Even whiteboxing does not make sense at small scale (Since it's back to the engineer cost for setting up/maintaining/developing them exceeds the cost of the network at small scale).
Isn't this how Labs have been built for years?
I know several colleges that take this route to source cheap Powervaults for storage, and write off the electric cost as OpEx. There are some that even schedule cast-offs from other organisations into their own upgrade cycle for things like switches.
It isn't always the best solution, but sometimes when budget is extremely tight (Charities, cash-strapped Uni departments, teaching / testing labs) and time is not at a premium it is a viable option.
With a little shrewd work, it is possible to obtain a perfectly usable lab with minimal CapEx outlay.
Re: Isn't this how Labs have been built for years?
"I know several colleges that take this route to source cheap Powervaults for storage, and write off the electric cost as OpEx."
In a lot of cases it's a blatantly false economy and the accountants should be sniffing around.
The power savings from updating my home server (an 8-year-old dual 5143 Xeon with 32GB of 2GB FB-DIMMs that draw 12W - EACH!) to a modern Atom server board (about 400 pounds with memory) are such that the newer board will have paid for itself in about six months from power savings alone - and it's a lot quieter too, because overall power consumption is down from 550W to ~90W and the fans don't need to scream.
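The payback claim works out roughly like this - the electricity tariff is my assumption (about 0.20 GBP/kWh all-in; a cheaper tariff stretches the payback accordingly):

```python
# Rough payback maths for replacing a 550 W server with a ~90 W Atom
# board costing about 400 GBP, running 24/7.
# The 0.20 GBP/kWh tariff is an assumption, not stated in the comment.
old_watts, new_watts = 550, 90
board_cost_gbp = 400
kwh_gbp = 0.20

saving_gbp_per_hour = (old_watts - new_watts) / 1000 * kwh_gbp
payback_months = board_cost_gbp / (saving_gbp_per_hour * 24 * 30)
print(f"payback in about {payback_months:.1f} months")  # -> about 6.0 months
```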
Been doing it for years
I've frequently used 2nd user kit, at one point a previous employer was getting 2nd user kit for pennies in the pound.
Dell desktop £39.00
r710 Servers £500
MD3000i £2000 (ish depending on disks)
We built up a good working relationship with a reseller, and made damn sure our backup strategies worked (LTO5 Tape drives £500)
Occasionally projects required a longer lead time, while resellers hunted down the kit we required.
It works, and works well.
We have set up two small companies using equipment from various used equipment resellers - our main supplier is one that gives employment to disabled people who refurbish or strip second hand equipment. In both cases costs were less than a quarter of new.
Yes, we do have a service contract with both companies but haven't seen any equipment problems in the three years they have been working.
Just to show what can be done for very low money.
(All items on eBay UK - buy it now prices)
Dell 2950 Twin Xeon Quad Core X5355 32GB RAM SAS/SATA RAID PowerEdge Rack Server £299.99
Dell PowerConnect 8024F 24 Port 10GB SFP 4 Port 10GB COPPER Layer 3 Switch 8024 £1699
DELL KJYD8 Dual Port SFP+ 10GbE PCIe 10GB Broadcom 5711 Network Card £99.99
Dell Cisco 10GBASE SFP+ Twinax Cable SFP-H10GB-CU3M £49.99 each
Add SSD storage as needed (about £300 per 1TB drive and Dell drive caddy)
second French online photo website (at the time)
As of 2005 (they sold out later):
- Linux firewall managing filtering, DMZ and VPN sessions for home-office users, replacing a horribly expensive yearly-licensed Nokia device. My firewall was a 1999-era Celeron 450MHz.
- Webserver: 1 x 4U grey box running Win2k on the cheapest hardware-RAID-enabled mobo, 2 gig RAM, and (eeeek) IIS and (ouch) ColdFusion...
- Database: same grey box (upcycled dev server / fileserver boxen from 2001), same config, faster CPU though, running SQL Server.
- Photo storage: 2 x 2TB D3 boxes (4 x 500GB in each box), linked to the webserver with a 20 quid PCI FireWire card - one live, the second replicating via a DOS script I wrote that ran at 3am. A third 1TB box was in the colo rack as a cold spare.
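The original 3am DOS script isn't shown; a hypothetical modern sketch of the same one-way mirror idea (all names here are mine, not from the comment):

```python
# One-way nightly mirror of a live photo store to a replica box:
# copy only files that are missing or newer on the destination.
import os
import shutil

def mirror(src: str, dst: str) -> int:
    """Mirror src into dst; return the number of files copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            # copy2 preserves mtime, so unchanged files are skipped next run
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)
                copied += 1
    return copied
```

Scheduled nightly via cron or Task Scheduler, this does the same job as the 3am DOS script: incremental, one-way, and restartable.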
A final cheap machine was installed in 2007 - a Core 2 Duo 2.7GHz with 2GB RAM, preinstalled with IIS and SQL Server, with a nightly database dump restored to it, as a global hot spare.
The most expensive part was the monthly colo rental...
My media server is largely ebay sourced, at least for all the interesting bits. The "server" itself was not, its a stock i5 that I bought from components.
For the drive enclosures, I found an ebay store selling Rackable SE 3016, which is a 3U half-depth enclosure with 16 hot swap SAS/SATA drive bays, a SAS 1 expander, a PSU and a SFF-8088 cable for $100 each (+insane shipping - I bought two, total cost was ~£500).
To hook this up to the server, I got a Dell-branded "SAS 6GB 8e" controller with two external SFF-8088 ports, again from eBay, £70 (free shipping!). The trick with this one is knowing that it's in fact an LSI SAS 2008 card; after some fiddling with flashing various BIOSes I soon had it behaving as an LSI 9211-8e in IT (initiator target) mode - meaning each connected drive appears directly to the OS, no RAID.
To house the enclosures, I use an IKEA LACK side table, which is exactly 19" wide and has ~6.25U of storage. The server sits on top of the side table, the enclosures inside it. I took the backs off the enclosures and replaced the noisy data centre PSUs with silent consumer ones, then put a fake back on the LACK table with cutouts for the PSUs and two huge 200mm extractor fans. This makes the entire thing silent, whilst still pulling through the same CFM that the original (Delta) fans did, but without the 80dB whine.
I run FreeBSD on the server, using ZFS to manage storage. The current configuration is 8 x 3 TB + 8 x 1.5 TB, for about 31 TB of usable storage. Sequential read speed from the array is around 800MB/s, most writes are async, and there is an SSD for an adaptive read cache.
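The ~31 TB usable figure checks out if (my assumption - the vdev layout isn't stated above) each set of eight drives is a raidz1 vdev:

```python
# Sanity check of the ~31 TB usable figure, assuming (not stated in the
# comment) two 8-drive raidz1 vdevs - one drive per vdev lost to parity.
def raidz1_usable_tb(n_disks: int, disk_tb: float) -> float:
    return (n_disks - 1) * disk_tb

usable = raidz1_usable_tb(8, 3.0) + raidz1_usable_tb(8, 1.5)
print(usable)  # -> 31.5, in line with the ~31 TB quoted after overheads
```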
As an SMB, I buy secondhand whenever I can, but the key is supportability. I learned the hard way one weekend when a Dell server went down in an odd way (CPU machine check bug associated with power state control) and the seller, a very large Dell aftermarket reseller on eBay, had no weekend support. I desperately wanted to call Dell, but couldn't.
That feeling you get when you're completely on your own, trying to get a production machine back in service on a deadline, well, it's just no fun. So buy used, yes, but be sure you can get the support you need, when you need it.
An entire small ISP
What can I build from eBay? an entire small ISP in the early 2000s.
We were primarily rural service, so we stocked up on Portmaster PM2s and rack mount 33.6K modems when everyone else was moving to 56K. Those were later upgraded to PM3s (amazing workhorses, they don't build them like that anymore!) when we were able to get inbound T1s (the PM3s coming on the market either due to consolidation of other ISPs, or ISPs that were seduced by marketing of other brands).
Upstream end was a stack of (probably gray market) CSU/DSUs tied into a menagerie of Cisco 2500 series routers (although the Boss could never understand the concept of one 2500 having 1M/1M and a wimpy version of IOS vs. a 16M router with a tricked out IOS image).
Servers were some white-box in-house machines and a stack of decommissioned Akamai boxes, later upgraded to Dell boxes (mostly rackmount, maybe missing some decorative bezels, but functional).
All that was tied together with a batch of 3Com 3c3300 series switches.
Worked great, the biggest obstacle was understanding that if you buy a stack of 4-5 year old servers, in two years you will have a stack of 6-7 year old servers, so you'd better plan on an aggressive plan for server replacement. If you buy brand new equipment, you can coast a bit longer.
Just done this!
Emprise ISE 5000
Brocade Silkworm 220e (4Gb FC)
LTO tape library
IBM X server
Barracuda load balancer
All for under £600 believe it or not...
Indeed, my personal internet server (sharing a friend's colo bay) is a 2U HP DL180 G6 - 12GB RAM, 12 cores at 2.6GHz, SAS, with 12 3.5" bays - that I got off fleabay for $740. Throw in a couple of SSDs for database and software, and a couple of SATA drives for bulk storage, and it totally rocks. I host a variety of nonprofit websites and such on this. We paid $7,000 for the exact same kit a few years ago at work.
It's not that unusual
After all if you don't need top notch equipment, second hand is much cheaper and the boxes need to be redundant anyhow.
For example I work at a VoIP provider and most of our networking gear in the production network was bought used.
Not everybody needs multi 10 gig networking.
eBay = Home lab bonanza
The only way I would even think of doing production junk box networking would be if so much redundancy were designed in that it wouldn't matter if you had warranty support or not. VM host blows up, oh well, at least the VM is safe and running on another host...
Company labs and home labs are a totally different story. Even without the dotcom bankruptcy sales (which were awesome back in the day) there is so much equipment being refurb'd or sold off-lease that's like new. I have a Cisco 3750G switch and a very well-licensed ASA5510 that I paid a few hundred apiece for -- easily would have been $5k even with discounts. One of the big things that I'm predicting will be up for sale more often soon is IBM System x boxes -- as customers decide they want to upgrade early to avoid the slow IBM death spiral re: support.
Anyone who's done IT work in large corporations knows that every project has "spare" hardware that somehow makes its way onto the eBay market, new-in-box, never powered on or installed. I've bought a lot of dubious-origin servers for the home lab that way. And since lots of companies lease their equipment, it winds up on eBay anyway with resellers trying to shift it.
If you can dream IT, we can build IT, even with dodgy suppliers
Just don't ask me to hang around for terribly long to support this cobbled together nightmare fuel
Flea bay for the servers
Hold out for the Storinator for storage though.
Sounds more like 'stolen box networking'.