Posts by Nate Amsden

2437 publicly visible posts • joined 19 Jun 2007

Broadcom has willingly dug its VMware hole, says cloud CEO

Nate Amsden

Re: What are Essentials customers supposed to do?

You should be able to upgrade to Essentials Plus?

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/vmw-datasheet-vsphere-product-line-comparison.pdf

I came across this PDF a few months ago and found it super informative:

https://nandresearch.com/wp-content/uploads/2023/12/NAND-Impact-of-VMware-Subscription-Changes-1.pdf

I'm planning on not needing to procure any new VMware server licenses for another year or two; perhaps the situation will be different by then.

Ivanti commits to secure-by-design overhaul after vulnerability nightmare

Nate Amsden

Re: Only after the fact

They have a short memory; a few years ago there were another few big security bugs that caused a bunch of issues. I remember providing feedback on multiple support cases at the time. They would ask something like "what could we do better?" and I said make it more secure... It seemed they got better for a while, until things fell apart again.

Fortunately none of the issues caused any compromises on my end. Thanks to El Reg and Ars Technica for the good reporting.

TrueNAS CORE 13 is the end of the FreeBSD version

Nate Amsden

Re: And what about the Clustered version?

For 3rd party TrueNAS HA support there is also https://www.high-availability.com/ which has built-in TrueNAS support. I was going to use this last year, then looked into TrueNAS more and realized I could not use it, as it did not appear to support any external fibre channel storage with MPIO etc (as far as I could tell), and their newer hyperconverged stuff was even less likely to work. So instead I built a pair of Linux systems on refurbished DL360 Gen9s with Ubuntu 20, using RSF-1 from that company. Support was good and quite cost effective. I think this is the same software Nexenta used to use (maybe still does, not sure) back when I tried to use them for HA about 12 years ago, though at the time it was "built into" the product. I want to say I saw references to RSF-1 back then, but maybe that's bad memory; apparently the tech is 20+ years old.

One caveat with RSF-1 is that you can't use NFS locking if you are exporting via NFS (so you have to mount with -o nolock, or you risk not being able to fail over gracefully). Learned that the hard way. My use case is purely NFS exports for low-usage stuff, but I wanted HA. I recently wrote some custom ZFS snapshot replication scripts for this NAS cluster to store backups from some other important systems for various periods of time by sending the snapshots to it; the ZFS data for that is stored on a dedicated/isolated storage array.
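For reference, a minimal example of what I mean by mounting without NFS locking (the hostname and paths here are just placeholders, not my actual setup):

mount -t nfs -o nolock nas-vip:/export/backups /mnt/backups

# or the equivalent /etc/fstab entry:
nas-vip:/export/backups  /mnt/backups  nfs  nolock,hard  0  0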

For home / "home lab" (I don't use that term for my own stuff, I have about 33 different systems between my home and my colo that I run), I do everything purely in Linux(and ESXi, though no vCenter etc), no FreeNAS/TrueNAS/appliances. My home "file server" is a 4U rackmount with Devuan linux, and at my colo my "file server" is a Terramaster 4 bay NAS with Devuan on a USB-connected external SSD.

Meet the Proxinator: A hyperbox that puts SATA at the heart of VMware migrations

Nate Amsden

Re: I configured my own for under $200

Hardware wise you can get NBD on-site support for a DL380 Gen8 for under $250/year, at least in the U.S., from several different companies (my last price from HPE for DL380 Gen8 + ESXi Foundation Care support was $3,259/server/year in 2018, or $1,852/server/year for hardware/firmware-only support on a slightly different Gen8 config that did NOT run ESXi). Of course that doesn't get you software-level support, but the software hasn't changed on those in a while, and some software (like iLO) is still available without a support contract, last I checked (last year). I'm 2,500 miles away from the hardware so remote support is a requirement for me. The datacenter has remote hands of course, though it's far easier to have 3rd party HW support where they know the systems and have the parts ready to go.

If you want 4-hour on-site support the cost is a bit more; I don't have an exact number for a Gen8 but I don't think it's even double.

I use a company called Service Express (which technically charges a monthly fee, so you could do month-to-month or partial-year contracts if you wanted; in my case they quoted a 2 or 3 year term, but their terms say it can be cancelled at any time with 30-60 days' notice, forgot which). I have used 3 other companies in the past, and with all of them the costs were about the same; there's been a lot of consolidation in the space the past few years.

Aside from the slightly annoying iLO flash dying out over time (https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c04996097 ; doesn't affect anything if you don't use their fancy provisioning stuff), Gen8 and even Gen9 (which has the same iLO issue) have been super solid for me over the past decade. Gen9 suffers from storage battery failures, but none of my Gen9s have local storage so I just ignore those failures these days.

British Library pushes the cloud button, says legacy IT estate cause of hefty rebuild

Nate Amsden

where are they going to get the money

Redoing all of that is going to cost some serious cash, and it seems like they were/are very short of funds. Guessing that so much old stuff was cheap to run since you don't pay for support and updates anymore; 3rd party hardware support on servers is probably 95% cheaper than full OEM hardware/software support. Legacy also likely means not many changes are happening, perhaps out of fear of breaking stuff. Eventually it'll all fail of course (the legacy stuff), so those savings don't last forever.

Of course many on El Reg have seen that in most cases cloud costs more. There certainly are situations where it makes sense. But it doesn't sound like they have a plan (or it wasn't in the article) for where the money will come from, how much they expect to need, or how long it will take. Seems like someone who doesn't know anything just said let's go cloud! I've dealt with several people like that over the years who don't know anything, and are then caught unprepared for the reality.

VMware urges emergency action to blunt hypervisor flaws

Nate Amsden

not too bad

I can think of only a couple of times in the past 15 years my ESXi systems needed a VM with a USB controller. And for Workstation, I do use USB passthrough (on one VM) but I'm not really concerned; if there is undetected malicious code there I have bigger things to worry about than VM escape. I don't know why VMware assigns a USB controller to new Windows VMs in ESXi by default; I always remove it, never needed it.

Also, on my Linux systems I disable the framebuffer (https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-15D965F3-05E3-4E59-9F08-B305FDE672DD.html) to prevent any repeat exploits of that kind. On Windows it's less useful without a framebuffer (assuming it's possible to have a functioning system at all with it disabled, not sure), so I leave it on there, but Windows makes up a tiny fraction of the overall VMs in my environment.

That said, I haven't had any known/detected malicious code on my systems since the Stoned virus in the early 90s (excluding some seemingly harmless things detected by virus scanners on game key generators and such in the late 90s).

'We had to educate Oracle about our contract,' CIO says after Big Red audit

Nate Amsden

I had to educate Oracle on their own software

Haven't had any serious Oracle software in a while, but at one company I went through two audits, back in 2006-2007. For the first audit I was a brand new employee so I wasn't involved. The company ignored my advice to change from Oracle EE to Oracle SE, telling me "we got it all figured out", until the 2nd audit where they got caught with their pants down again and had to pay up big (for them anyway). I recall being in that audit and educating the Oracle folks on Oracle SE per-socket CPU licensing vs per-core on EE. They didn't believe me at first, but later checked and verified I was right; I was quite surprised they were not aware of that basic thing at the time. The company originally licensed Oracle SE One (before I started), and their DB consultants deployed Oracle EE (because that was their standard; perhaps they were never informed of the license requirements). After the 2nd audit I moved everything to Oracle SE and optimized things quite well, such as changing from dual-socket/dual-core processors to single-socket/quad-core (in a couple of cases HP had to replace the motherboards, as they advertised the DL380 G5 as supporting quad core but the early generation did not; they later updated their docs to reflect this).
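To illustrate the difference (as I remember the rules of that era, so treat the exact numbers as approximate): SE was licensed per occupied socket, while EE was licensed per core times a core factor (0.5 for x86 multicore at the time). So a dual-socket/dual-core box needed 2 SE processor licenses, while the same 4 cores in a single-socket/quad-core box needed only 1, roughly halving the SE bill for the same core count.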

A couple of jobs later I licensed a tiny Oracle SE 20-user (or something) license for VMware vCenter, so I could run the DB on Linux; I had some history with Oracle, unlike DB2 which was the other option, and I wasn't going to use MSSQL since I wanted Linux and MSSQL didn't run on Linux at the time. I believe I kept it in compliance without issue the whole time I used it (retired it when I deployed a new vCenter 6.5 and retired the 5.5). Oracle would often ask me what I was using Oracle for, and when I told them they always shut up fast and never brought up any possibility of an audit.

The main thing I miss about Oracle DB is Enterprise Manager's performance stuff; that was so cool. And at least with Oracle 10g you could use it with Oracle SE (even though technically it required an EE license), so at that job in 2007 that is what I did, and I could wipe all evidence of it easily with a single command if needed. Oracle 11g or 11gR2 or whatever closed that licensing loophole, so at my next gig with the vCenter stuff I was unable to install that performance pack into Enterprise Manager. I haven't seen anything come within 1000 miles of being as useful as that instrumentation, and its web-based ease of use for identifying things, since. Fortunately my day job is not DBA so I guess it's not my responsibility; still, it was so nice to be able to easily self-serve information like that.

HPE blames GPU shortage for contributing to unexpected sales slide

Nate Amsden

plenty of time

20 weeks to get your servers seems like plenty of time to prepare your datacenter ..

I remember the first company I was at that used HP servers, in 2003. It took a while to get stuff, which I later learned was mainly due to integration issues with Compaq. So I remember we had every server shipped overnight.

A path out of bloat: A Linux built for VMs

Nate Amsden

Re: Existing microvms

When I was reading this I thought of containers too. Not sure what the author's point is, but as someone who has been using VMware for 25 years (and Linux for 28 years), this concept doesn't sound useful to me. One of the big points of VMs is better isolation; I want local filesystems, local networking, etc in the VM. If I don't want that overhead then I can (and do) use LXC, which I have been using for about 10 years now (both at home and in mission-critical workloads of stateless systems at work). Never been a fan of docker-style containers myself.

When I think of a purpose-built guest for a VM it mostly comes down to the kernel, specifically being able to easily hot add and, more importantly, hot remove CPUs and memory on demand (something that VMware at least cannot do). I think I have read that Hyper-V has more flexibility, at least with Windows guests and memory, but I'm not sure of the specifics. Ideally I'd want the OPTION (perhaps a VM-level config option) that if, for example, CPU and/or memory usage gets too high for too long AND there are sufficient resources on the hypervisor, the guest can automatically request additional CPU core(s) and/or memory, then release those after things calm down. I believe in Linux you can set a CPU to "offline" (see the example below; I have never tried it so am unsure of the effects, if any), but you still can't fully remove it from the VM in VMware (at least; unsure about Hyper-V/Xen/KVM) without powering the VM off.
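For reference, this is the sysfs interface I mean for offlining a CPU from inside a guest (cpu2 is just an arbitrary example; cpu0 typically cannot be offlined):

cat /sys/devices/system/cpu/cpu2/online
echo 0 | sudo tee /sys/devices/system/cpu/cpu2/online   # take cpu2 offline
echo 1 | sudo tee /sys/devices/system/cpu/cpu2/online   # bring it back online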

Side note: Linux systems can apparently freeze if you cross the 3GB boundary while hot adding memory, so VMware doesn't allow you to go past 3GB if you are currently below 3GB. That is a bit annoying; it means if you built a VM with 2GB of memory and want to hot add to 4GB, the VM has to be shut off first. Fixing that would be another nice thing for a purpose-built VM guest OS.

Most distro-specific issues, especially hardware drivers, are of course basically gone in VMs. I spent countless hours customizing Red Hat PXE kickstart installers with special drivers because the defaults didn't include support for some piece of important hardware; the most problematic at the time (pre-2010) was probably the Intel e1000e NIC, as well as Broadcom NICs sometimes (and on at least one occasion I needed to add support for a SATA controller). Can't kickstart without a working NIC... but wow, the pain of determining the kernel, then finding the right kernel source to download, compiling the drivers, and inserting them into the bootable stuff. I think that is the only time in my life that I used the cpio command. Intel had a hell of a time iterating on their e1000e NICs, making newer versions that look the same and sound the same but only work with a specific newer version of the driver.

The exception may be Windows on the drivers front. I've installed a bunch of Windows 2019 servers in VMs over the past year, and I have made it a point to attach TWO ISO images to the VM when I create it: the first ISO is for the OS itself, and the 2nd is a specific version of the VMware Tools ISO that has the paravirtual SCSI drivers on it (newer versions of the ISO either don't have the drivers or they didn't work last I checked), just so I don't have to mess around with changing ISO images during install. I don't have any automation around building Windows VMs as I'm not a Windows person, but have quite a bit around building Ubuntu VMs. So strange to me that MS doesn't include these drivers out of the box; they've been around for at least 10 years now. Not sure if they include VMXNET3 drivers; I don't need networking during install, and installing VMware Tools after the install is done is the first thing I do, which would grab those drivers.

I never touched Plan 9, I don't think, but the name triggered a memory of mine from the 90s when I believe I tried to install Inferno OS (and I think I failed, or at least lost interest pretty quickly) https://en.wikipedia.org/wiki/Inferno_(operating_system) "Inferno was based on the experience gained with Plan 9 from Bell Labs, and the further research of Bell Labs into operating systems, languages, on-the-fly compilers, graphics, security, networking and portability."

Perhaps someone who knows more(maybe the author) could chime in why they are interested in Plan 9 and not Inferno, as the description implies Inferno was built based on lessons learned from Plan 9, so I assume it would be a better approach at least in their view.

I dug a little deeper into Inferno recently and found what I thought was a funny bug report, the only bug report on it for github, for software that hasn't seen a major release in 20 years(according to wikipedia anyway)

https://github.com/inferno-os/inferno-os/issues/8 the reporter was suggesting they update one of the libraries due to security issues in code that was released in 2002. Just made me laugh, of all the things to report, and they reported it just a few months ago.

side note I disable the framebuffer(?) in my Linux VMs at work by default

https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-15D965F3-05E3-4E59-9F08-B305FDE672DD.html

If you do that you need to update grub (these are the options I use; I suspect only nofb and nomodeset are related to the change):

perl -pi -e s'/GRUB_CMDLINE_LINUX_DEFAULT.*/GRUB_CMDLINE_LINUX_DEFAULT\=\"spectre_v2=off nopti nofb nomodeset ipv6.disable=1 net.ifnames=0 biosdevname=0\"/'g /etc/default/grub

perl -pi -e s'/^#GRUB_TERMINAL/GRUB_TERMINAL/'g /etc/default/grub

If you don't do that in grub you'll just see a blank screen in VMware when the system boots.
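After editing /etc/default/grub you also need to regenerate the grub config for the change to take effect; on Debian/Ubuntu-family systems that is:

update-grub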

There has been at least one security bug in VMware over the years related to guest escape and the framebuffer (maybe just this: https://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-PAPER.pdf), so I figure I'll disable it since I don't need it anyway.

Oxide reimagines private cloud as... a 2,500-pound blade server?

Nate Amsden

reminds me of SGI cloudrack

15 years ago ...

https://www.theregister.com/2009/03/18/rackable_cloudrack_two/

Hardware wise anyway

"This time around, the trays don't include the power supply, which has been shifted out into the rack enclosure itself and which provides direct conversion from three-phase AC power coming out of the data center walls to 12V power needed by the servers on the tray. So, the "server" doesn't have a cover, doesn't have any fans, and doesn't have a power supply."

I was looking at using these at the time for my (then) org's first Hadoop cluster. The VP of the group decided to go another direction, cutting corners on quality and cost. I left shortly after. I was later told their new vendor had a 30% hardware failure rate, and with quorum requirements it meant the cluster ran at half capacity for its first year of operation. I had a great laugh. The company is long dead now. I enjoyed the 2nd best commute though (short of working from home); the office was literally across the street from my apartment. I actually had co-workers driving in and parking further away to avoid paying parking fees.

Cloudflare joins the 'we found ways to run our kit for longer' club

Nate Amsden

found a way

They found a way...

"Hey procurement person - yeah you, no need to do any work this year do it next year."

Meanwhile, I just checked: I have a few HPE DL380 Gen8s that entered their 11th year of service in the past week, serving important (but not critical) roles. And my oldest switches, from when my org moved out of the cloud, are still in service (software support ended last year but hardware support continues to 2026) after 4,434 days according to their odometer readings, which translates to 12.1 years. I do plan to replace both sets of these old systems this year. The switches are in critical roles, though in a redundant configuration (Extreme Networks X670V-48x).

I'm sure others have older of course, but 5 years is a very low bar.

Dell kills sweetheart distribution deal with Broadcom's VMware

Nate Amsden

Historically ESXi came by default with a 60 (or maybe 30) day eval license with all features unlocked, so this shouldn't be an issue. Maybe Broadcom now forbids any distribution of it for whatever reason, but as of yesterday Dell was still offering ESXi (no license included) as an option when ordering a server.

then if the customer likes what they see in the eval license they can go buy the real thing..

Nate Amsden

Re: OK, I'll bite

Dell will sell you servers with Windows Datacenter preinstalled (unlimited VMs), as well as Red Hat/Ubuntu/SuSE (all 3 are subscription based). I don't see an option for the "free" Hyper-V (assuming that is still around) to come preinstalled. I suspect HPE is the same, though Dell's site is easier to navigate to find what options are available.

There's obviously not much market demand for Xen or even KVM as a hypervisor among typical Dell customers (the ones that want it will install it themselves), or they'd offer those. Red Hat has a formal KVM-based virtualization offering (which I do not see listed as purchasable along with a Dell server), and SuSE/Ubuntu have KVM as well; Ubuntu has put extra effort into LXC/LXD too, I think (I first started using LXC back in 2014 and still use it today).

Those more hard-core on the open source end of things will just do it themselves regardless. When I bought HPE servers with VMware licensing, the software never came preinstalled; they just included the license, which HP then synchronized to VMware's license portal so it was all in one place. All of my critical VMware systems are boot-from-SAN with no local storage anyway, so they wouldn't have been able to preinstall it (though some early systems were mistakenly shipped with SD cards with VMware pre-installed, which caused confusion for me years later until I figured that out and removed the cards).

Not sure what may happen (unless it happened already) to the "free" ESXi licenses, which were a bit limited but still handy (I use them on a couple of personal servers at a colo; no need for vMotion etc).

Nate Amsden

Re: Don't care if ESXi is installed when I buy the Dell system.

You might care if you are forced to pay for (and pay support fees on) Tanzu, vSAN, and Aria Suite (the new "Operations Manager", I think?) for those Dell systems whether or not you ever plan to use those features, plus per-core licensing on an annual subscription (which I assume means if you don't pay the subscription the software stops working), as VMware no longer sells ESXi as a standalone product (perpetual or subscription).

Nate Amsden

When I checked the Dell website about a week or two ago, the only VMware option was that they would install the hypervisor without a license if you wanted; they couldn't sell you a license. And for HPE, I found one link to the HPE version of VMware, but clicking on it gave me a 404. HPE still listed VMware as compatible, but the data sheet I looked at did not list any VMware software that could be purchased. Which is when I dug deeper and came across this (https://nandresearch.com/wp-content/uploads/2023/12/NAND-Impact-of-VMware-Subscription-Changes-1.pdf), which was super informative. I thought I knew what I needed to know from reading El Reg, but these specific details escaped me until I saw the PDF. Until that point I had been assuming the bundling was specific to the cloud product suites (and even at the time, VMware's own store was still selling vSphere and vCenter; however they no longer have those products on their store now).

I have no doubt all vendors will continue to certify/support with VMware as long as there is a market demand for it.

Nate Amsden

Puts a lot more pressure on VMware's tech support team(s)

At least with HPE, and I assume Dell and probably others, when you bought VMware through them they typically provided the tier 1 & 2 VMware support (at least for hypervisor + vCenter; can't speak to other products), and they could escalate to VMware's team as well if needed. (Though I have heard that in general all of them are pretty terrible across the board for support quality.)

Fortunately I haven't really needed their support in a long time. I have had only one serious support incident with VMware in the past 17 years of using ESXi, when a Windows-based vCenter corrupted itself (OS-level corruption; the DB was in Oracle on a Linux system and was unaffected) and I was just looking for the best way to recover, out of paranoia. In the end the recovery was simple (install a new OS and new vCenter, point it at the existing DB); it just took support a week to communicate that properly. I didn't know if doing that would cause the hosts to freak out, and in the end the hosts didn't care, as I assumed they wouldn't, but I wasn't 100% sure. Keeping configs simple and staying years behind the curve means fewer problems (and fewer headaches dealing with support).

But it sounds like Broadcom already terminated the deal by saying Dell (and HPE and others) can't resell their stuff. Not as if it being a subscription has anything to do with it; Dell I'm sure resells subscriptions to lots of things, tons on the Microsoft side at least. It will be interesting to see the fallout over the year. I'm assuming it will be pretty big, but time will tell of course.

Ivanti releases patches for VPN zero-days, discloses two more high-severity vulns

Nate Amsden

factory reset only if you are hacked

Unless you are super paranoid, I guess. Their more in-depth docs are clearer:

https://forums.ivanti.com/s/article/Recovery-Steps-Related-to-CVE-2023-46805-and-CVE-2024-21887?language=en_US

"If your ICT scans were clean, you are not affected by this activity."

Basically if the external integrity check fails then you should factory reset.

Their new mitigation to work around the SAML thing seems to break SAML entirely (fortunately I don't use that... yet. But Duo is forcing everyone who uses Ivanti Secure to SAML soon, at least those that want the fancy Duo UI integration).

As Broadcom nukes VMware's channel, the big winner is set to be Nutanix

Nate Amsden

Wondering on the backlash

I wasn't quite up to speed on this as I assumed I was, but I got myself up to speed now (via https://nandresearch.com/research-brief-impact-of-new-vmware-bundles-pricing/). After learning what I learned regarding the bundles and the end of standalone product sales (which I was surprised to see after checking both Dell's and HPE's sites for VMware-related things and finding nothing), I have to think back to the "vRAM tax" VMware tried a decade or so ago, which they eventually backtracked on (it took a while for them to realize it, and fortunately it never affected me as I was on an older version and didn't upgrade until a while after things reversed).

Wondering, if after a year or so a bunch of customers jump ship, whether Broadcom would reverse course (maybe not entirely, but to some degree). Maybe, maybe not; it all depends on the market reaction.

Broadcom ditches VMware Cloud Service Providers

Nate Amsden

Re: The End

I'm guessing these cloud providers use more sophisticated VMware software (probably vCloud and other stuff that I've never used), and Proxmox wouldn't be a suitable alternative out of the box (any more than ESXi + vCenter by itself would be). They'd probably have to do quite a bit of custom work (provisioning/billing/resource management), which was probably why they chose VMware to begin with: to reduce the amount of work their staff had to do. Of course the biggest cloud players do all of that work anyway, so they don't need VMware.

They could probably try the OpenStack route, but that would be a whole other can of worms, with tons of new skills/staffing required in most cases; the small providers won't have the expertise to do it, and it may not be cost effective because of that.

sucks for them for sure though.

Official: Hewlett Packard Enterprise wants to swallow Juniper Networks in $14B deal

Nate Amsden

does juniper do much in the "AI" space?

I have been watching the networking space off and on since about 2003. Juniper tried to buy Extreme way back around that time, before deciding to go it alone and make their own switches. I remember their first 10G 1U switch; it didn't even run their software, they OEM'd some 3rd party platform and put their name on it. They came out with their own stuff eventually, of course.

As time went on it seemed they struggled to gain much share outside of the service provider space (basically they were mostly selling to people already buying their routers). I haven't noticed much change in that regard, so it seems strange to me that HPE would place this kind of bet with "AI" as a justification. Maybe it's just due to the hype surrounding it, making it more likely their shareholders approve of the purchase or something.

Juniper certainly has a lot of solid tech though the complexity of their JunOS is pretty crazy to me(same goes for Cisco). I'm sure it makes sense for several use cases though.

As someone who does networking as only a minor part of my role (though at every company I have worked for in the last 20 years my networking expertise was above most everyone else's in the org), Extreme's simplicity and functionality beat everything else on the market (I've been using them since 1999). Hell, I still use network designs I first started with in 2004 because they work so well (such designs wouldn't apply to "AI" workloads, but I don't have any of those). Not that Extreme has ever really taken off in the market either; I remember a Foundry Networks rep back in 2004 trying to convince me Extreme was going out of business within a couple of years... ironic that it was Extreme who acquired several of the Brocade (Foundry) network assets over a decade later.

HPE seems to have so many different switching platforms under their roof; hopefully they can consolidate the user interfaces at least (there are pretty radical differences between some of them).

It certainly seems to me that if AI networking were a super important thing to HPE, they could do a much better job sourcing the same merchant silicon ASICs and making their own high-end stuff for a fraction of the money. They obviously have the server/storage (and even networking; Aruba switching is super popular among network engineers, though I have never personally tried it) market penetration already.

Curious for anyone who knows - does HPE have (m)any supercomputers (Cray, etc?) out there running on Juniper network gear? I assume a lot of it is infiniband or something. Curious because I'd expect similar networking requirements for AI.

Looking at Juniper's stock price since the dot com era, I was quite surprised how little it has appeared to move in the past 20 years.

Singapore wants datacenters, clouds, regulated like critical infrastructure

Nate Amsden

weird

Seems like all of this should be directed at the companies (banks or whomever) that are using the providers. The providers have SLAs, and it's up to the customer to determine what SLA is acceptable and how to plan (or whether to plan) for breaches of that SLA. This is even more important for most cloud providers: as I have been saying for 12 years now, they are "built to fail", meaning you have to build with failure in mind, much more so than on-prem (which more often has redundancy built in, whether it's redundant storage, VMware high availability, fault tolerance, highly available networking, etc). Things fail in cloud 10x more often than on-prem in my 20 years of experience.

If you direct the businesses of Singapore that fall under this to maintain some kind of SLA, then it's on them to choose providers/designs/etc in order to try to meet that SLA.

If you try to enforce better SLA terms on the major cloud companies they will likely just laugh at you. Even Microsoft tells their Office 365 customers to keep backups of their Office 365 data, as MS is not responsible for that.

The worst uptime of any company I've worked for was about 20 years ago; the company provided online transaction processing for major U.S. mobile telcos. Our SLA was 99.5% uptime per month, and that EXCLUDED 12 hours of scheduled downtime for software upgrades each month. We missed the 99.5% SLA most of the time anyway, I think, mostly due to bad software design. I recall one ~30 hour storage array outage, another ~20-30 hour Oracle DB outage (Oracle flew on site and said nobody else in the world was doing what we were doing, we were doing it wrong, and to fix our app), another ~12 hour internet outage (due to bad BGP routes sending our traffic to Russia, resulting in 99% packet loss), many multi-hour BEA WebLogic outages, and dozens of others. I learned a lot (critical to my future career, though I didn't realize it at the time), had fun, and burned out hard core; it took years to recover. Despite all of those outages we never had a single blip in data center availability. If you bought any games/apps/ringtones/wallpapers/etc on the major U.S. carriers 20 years ago, those transactions flowed through our systems.

VMware channel partner rates new product bundles and subs-only licenses 'very attractive'

Nate Amsden

Re: perpetual licensing still available for vsphere?

I noticed that, though I was specifically referring to vSphere Enterprise Plus, 1 CPU with production support. The cost seems the same as before; same as it's been for what seems like 13 years.

Nate Amsden

perpetual licensing still available for vsphere?

Maybe it's a mistake, or maybe the people responsible aren't with VMware anymore, but I noticed a few days ago (and checked again now) that the basic vSphere licensing still appears to be perpetual on VMware's store.

https://store-us.vmware.com/products/data-center-virtualization-cloud-infrastructure.html

The pricing looks to be about the same as well.

I've been a VMware customer since 1999; the only products from them I care about are vSphere, vCenter, and VMware Workstation.

I forgot to check if VMware had their typical Black Friday sale for Workstation this year, though my version is still current from last year it seems. Normally that is when I buy it for myself, every 2-3 years. I still have VMware Workstation for Linux going back to v3 (Nov 2001 is the timestamp), just for nostalgia; sadly I misplaced my "VMware for Linux 1.0 (or 1.0.x?)" CD a decade ago.

Not even LinkedIn is that keen on Microsoft's cloud: Shift to Azure abandoned

Nate Amsden

Re: Converting Hotmail from UNIX to Windows 2000

Yes, that looks like a more detailed version of what I read back then; it specifically cites FreeBSD. I recall something along the lines of the FreeBSD install being maybe 50MB, vs Windows being several hundred megs or gigs at the time, which made it impossible (or at least really hard) to deploy the systems over the network at the scale they were operating at.

Nate Amsden

I don't recall them trying even once. They tried a lot in the early days to get to Windows off of FreeBSD, I think it was. I assume that "Windows core" was the result of those efforts to make a smaller OS.

But good to see yet another example of people admitting failure in public cloud instead of just doubling down and paying more just to show they can do it. Not as if MS has shallow pockets.

Kernel kerfuffle kiboshes Debian 12.3 release

Nate Amsden

dselect

Debian had dselect before apt, and it handled dependencies; it was a lot more painful to work with though, for sure. I still remember on one install spending about 6 hours going through the various packages in dselect to find what I wanted, because after the install was done I didn't know how to start that tool again (or even what its name was). Though back in those days I still built a lot of things from upstream source directly (including KDE, GCC, GNOME, X11, and kernels; the last kernels I regularly built were 2.2.x).

For me, apt started to become much more useful with Debian 3, which for whatever reason I recall as having a huge jump in the number of packages available (the first huge jump in my memory at least); it was the only Debian version where I used "testing" for several months (instead of "stable"). Reading the release notes of Debian 3 (I tried to find the number of packages), they specifically said apt-get was not recommended for upgrading the distribution version (and to use dselect instead).

Even though dselect has been deprecated for over 20 years, it still serves a critical role for me in copying package states between systems (not something I do often, but it is handy):

dpkg --get-selections >selections.txt

(copy selections.txt to the other system)

dpkg --set-selections <selections.txt

apt-get dselect-upgrade

Perhaps there is another way to do it these days. I recall at one point trying to find a similar process on an RPM-based system (don't recall which), but was not able to.
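(A possible newer-style equivalent on apt-based systems, which I haven't verified for this exact workflow, would be something along these lines, just as a sketch:

apt-mark showmanual > manual-packages.txt

# copy manual-packages.txt to the other system, then:
xargs -a manual-packages.txt apt-get install -y

though unlike the dpkg selections approach above, this only captures manually installed packages, not hold/remove states.)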

My systems at work are pretty much all built with configuration management software, so there is little or no need to do that process there, but my ~20ish home Linux systems (including my public email/web/DNS) are still managed ad hoc.

Steam client drops support on macOS, but adds it on Linux

Nate Amsden

It wasn't Microsoft that did this, at least if I recall right. It is the x86-64 platform itself (the hardware) that dropped support for 16-bit code while running in 64-bit mode. I think it had to do with the registers on the CPU; I assume it was just a way to make things simpler (and cheaper).

Rackspace runs short of Cloud Files storage in LON region

Nate Amsden

Re: Move to something else

I'm pretty sure this storage isn't using SSDs for anything beyond metadata; bulk storage like this is always on NL drives. If only a few % of requests are failing, it sounds more like uneven distribution of data across the storage systems; there may be lots of space available overall, but it's distributed across several different silos (most likely), and some of those could be getting full, resulting in slower access times or allocation errors (just guessing of course).

With some storage systems, at least in the past, if you were 80% full it was too late already; you had to start planning to add more at 50-60% full. It all depends on how well the storage stack handles such situations. I've never used OpenStack and have never managed an object storage system, so I can't say myself. Another factor could be fragmentation, and reclaiming deleted space for use by other things. The systems may be struggling to rebalance themselves under the load too, perhaps due to limitations in rack space and/or power; perhaps Rackspace was trying to eke too much capacity out of too little space (meaning maybe they need more racks/power and perhaps don't have enough available).

Often the operator of the storage system doesn't know or expect such problems to exist until they actually encounter the situation for themselves. I know such situations have happened to me on many occasions. The one I like to cite the most is from back in 2010, when I deployed a new NAS cluster in front of my 3PAR T400 array; the companies were partnered with each other, so they knew each other's abilities. The NAS vendor assured me they were thin provisioning friendly. After a month of operation that proved to be a lie. Fortunately the 3PAR system had the ability to dynamically re-allocate storage to a different RAID level to give me more space without application impact. To add to that point, we were at the limit of the number of supported drives in our 3PAR array; if we wanted to add any more we'd have to purchase 2 more controllers (for a total of 4), which was not cheap. I spent 3-4 months running the data conversions (99% of it handled in the background on the array) and we got things converted in time, so we did not need to buy any more drives to support the system. We ended up buying more drives later anyway because we wanted to store more data, so we did buy the extra 2 controllers and another hundred or so drives maybe a year later.

but in short it is on them to know this kind of stuff in the end regardless.

Server sales down 31% at HPE as enterprises hack spending

Nate Amsden

Major cloud providers have been building their own servers for 10-15+ years, assuming of course that by "purchase from the majors" you mean HPE/Dell/Lenovo/Cisco/etc (and not the Asian "ODMs"). Just for reference, the Open Compute Project is apparently 12 years old, though major cloud companies were building their own long before that as well. Google has been building their own probably since they were founded.

Pulled these links from my retired blog from 2011.

Microsoft Reveals its Specialty Servers, Racks

https://www.datacenterknowledge.com/archives/2011/04/25/microsoft-reveals-its-specialty-servers-racks

Facebook Unveils Custom Servers, Facility Design

https://www.datacenterknowledge.com/archives/2011/04/07/facebook-unveils-custom-servers-facility-design/

Trio of major holes in ownCloud expose admin passwords, allow unauthenticated file mods

Nate Amsden

Re: wtf is a pre signed url

Minor update: even after upgrading (from 10.9 to the latest) I can find no sign of this feature anywhere (not that I need/want it), though I do see in the changelog it says "Bugfix - Disallow pre-signed URL access when the signing key is not initialized". The changelog wouldn't make me think this was an important security thing just from reading that.

It sort of makes me feel better that some security stuff is getting exploited in ownCloud; I mean, it makes me think at least there are folks looking at the code and fixing some things.

Nate Amsden

wtf is a pre signed url

Been using ownCloud for over a decade and never heard of it... Poking through the source code I came across this:

https://github.com/owncloud/core/pull/38376 "pre-signed download urls for password protected public links"

I did a few web searches, and the only hits I could find for such a thing were mentions of this security advisory.

Seems like a way to allow clients that don't support cookies to access password-protected things? I don't see any way in the UI to enable or disable that function (or, if it's already enabled, any option for using it). I suspect my ownCloud servers have no users that would ever use such a function anyway (not that I won't patch soon). I've certainly never had the need to do that; if I need to host something for a client like wget, I host it outside of ownCloud.

OpenCart owner turns air blue after researcher discloses serious vuln

Nate Amsden

missing competitor

Looks like OpenCart is an open source shopping thing. So I'd say their biggest competitor is Magento (which has open source, commercial, and hosted offerings I think), not those other SaaS platforms mentioned (though those could be the biggest in the space). Part of the point of having the source is being able to customize it beyond what SaaS allows.

I worked at a couple of orgs that used Magento enterprise(hosted on our own gear), though the last version that we used was 1.14 I think (which is probably 8-10 years old now).

Canonical intros Microcloud: Simple, free, on-prem Linux clustering

Nate Amsden

Not done if you are running Windows VMs on top of any hypervisor (Hyper-V included). You have to either buy Datacenter or license the underlying hardware (plus the number of VMs) for Windows Server; even if you are just running one Windows VM on a multi-node cluster, you have to license all of the nodes, same as if you were using Oracle DB (though ironically that is not the case for SQL Server, provided you have an active Software Assurance contract).
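As a rough illustration of how that adds up (this is my understanding of the current per-core rules, so check the licensing docs rather than taking my word: all physical cores on every host the VM can run on must be licensed, with a 16-core minimum per host, and a full set of Standard licenses covering 2 Windows VMs): a single Windows VM free to move across a 3-node cluster of dual-socket 16-core hosts would need 3 x 32 = 96 Standard core licenses, which is why Datacenter, or pinning Windows VMs to a licensed subset of hosts, starts to look attractive.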

Clorox CISO flushes self after multimillion-dollar cyberattack

Nate Amsden

Re: When not if

Google is also looking to yank internet access from many employees, because apparently zero trust isn't enough:

https://www.theregister.com/2023/07/19/google_cuts_internet/

Bing Chat so hungry for GPUs, Microsoft will rent them from Oracle

Nate Amsden

Re: Be afraid, very afraid.

Perhaps uBlock Origin blocks them, I am not sure, but I don't recall ever noticing ads for Oracle in Bing searches, whether with Firefox on desktop Linux or Firefox on Android, both with uBlock Origin.

I switched from Google to Bing myself maybe 4-7 years ago, I forget now; never felt a need to go back, it works fine for me.

I've never used bing chat, or any of the other AI chat things myself, haven't had a use for them yet anyway.

MariaDB Foundation CEO claims 'sanity' has returned to MariaDB plc

Nate Amsden

Re: need a cloud version.. but there are cloud versions already

I think the difference is that the non profit(?) foundation is driving that partnership with amazon not the for-profit entity.

The for-profit entity obviously has some proprietary cloud SQL stuff (SkySQL and that other thing); they should have made a better effort to partner as a technology partner rather than trying to run their own cloud service (on top of someone else's cloud). That's a no-win scenario unless you have bucketloads of cash, which they obviously don't have.

Look at Splunk, running at a loss every year since they were founded, with $3B in debt. Also look at... Snowflake? I think they have something similar (building a cloud on top of another cloud). I'm not a financial person, but their recent quarterly report implies to me almost a $300M loss for the quarter: https://investors.snowflake.com/news/news-details/2023/Snowflake-Reports-Financial-Results-for-the-Second-Quarter-of-Fiscal-2024/default.aspx

Nate Amsden

need a cloud version.. but there are cloud versions already

I know what they mean; it just makes me think they need to let go of that concept:

---

Despite the plc's decision to ditch SkySQL, Arnö said users of the MariaDB DBaaS would not struggle to find a new home as AWS Relational Database Service offers a MariaDB service, as do other cloud providers. He said the plc was likely to have another attempt at providing its own database service.

"It's so evident that all databases need a cloud version. You're not the proper database if you don't have a cloud version," he said.

---

That person says in one breath that there are MariaDB cloud services out there already, so their strategy instead should be working with those companies to make those services better and getting a cut of the sales. Not that I think all such providers would be open to that (some would be), but it's better than trying to go head to head, especially after ditching their recent cloud stuff. Customers are going to be even more wary that it will happen again, unless MariaDB finds a load of cash they can use to run services at a loss for years, since that is likely what it will take.

Nate Amsden

If you are using Linux, and the MySQL included with it, you may be using MariaDB and not even realize it:

https://mariadb.com/kb/en/distributions-which-include-mariadb/

Can't really compare (IMO) Postgres to Maria/MySQL unless you are building the app to support one or the other. Most apps I've encountered in the past 20 years tend to support just one or two DB products; rarely have I seen one that supports both Postgres and MySQL, though I'm sure they are out there.

So it comes down to what DBs the app(s) you use support, and also what you are most comfortable supporting. I wouldn't suggest abandoning one DB engine, whatever it is, for another unless you are ready and able to take on the new one. Postgres and MySQL are very different animals. (I say that as someone who has primarily a MySQL background, and I'm NOT a DBA, but Postgres is radically different from a day-to-day management standpoint.)

Splunk sheds 7% of workers amid Cisco's $28B embrace

Nate Amsden

Saw an interesting comment from a person on LinkedIn a month ago; unsure if the info came directly from them or if they quoted another source:

"Since its inception, Splunk has incurred net losses yearly, resulting in an accumulated deficit of $4.05 billion. Splunk has bet its future on costly cloud services that require continual infrastructure investments. Splunk’s $3.099 billion in debt exceeds its annual revenue. "

Which surprised me; I assumed Splunk was doing much better than continuing to make losses every year since they started. Hard to imagine what they could do to get that debt under control in that situation (other than get bought, I suppose).

Mozilla treats Debian devotees to the raw taste of Firefox Nightly

Nate Amsden

wonder if they will have a repo at some point

MS Edge for Linux installs a config so apt can download updates directly from https://packages.microsoft.com/repos/edge/ . Firefox is my main browser, though I did switch to Edge for MS Teams on Linux as well as OWA (I prefer to have things in different browsers; in the past I used another instance of Firefox, as well as SeaMonkey).

After switching back to Firefox from Pale Moon (after Pale Moon decided to break all of the addons, though I think they reverted that decision later), I have been on ESR ever since (it took some time getting over losing the addons I had used for a decade or more, many of which would never be updated again). I manually install to /usr/local/firefox-<version> and symlink /usr/local/firefox to that. I guess there's no proper OS integration, though I run my Firefox as a different user account: "sudo -u firefox -P -H VDPAU_NVIDIA_NO_OVERLAY=1 /usr/local/firefox/firefox %u" (unsure if that NVIDIA option is still needed these days; I still get video tearing, but eventually realized it wasn't a Linux issue as there is tearing in video on Windows as well on the same hardware).

Edge runs under another account as well. Was really annoying to configure Pulse Audio to work right, but eventually figured it out.

Apple lifts the sheet on a trio of 'scary fast' M3 SoCs built on a 3nm process

Nate Amsden

Re: Linux is twice as efficient in memory than windows I've found.

As a Linux on the desktop user since 1998, I can say that the Linux kernel has had very annoying memory behavior on laptops with 8GB of RAM in the past ~8 years or so; 10+ years ago things were fine (at least for me). I went from a laptop with 8GB in 2010 to a laptop with 8GB in 2016, and actual memory usage was about the same (peaking close to 7GB), but the newer system (running the same OS, Linux Mint 17, with Mint 20 installed years later, but probably a newer kernel, I don't recall) would frequently get into swap hell: completely frozen for several minutes while it decided it wanted to swap out a bunch of stuff (the "swappiness" setting doesn't do anything anymore, and everything was SSD of course). That never happened on the older system, and it wasn't until I upgraded to 16GB+ that it stopped. Drove me mad. More recently (starting with the 5.4 kernel on Ubuntu 20.04) I have seen the kernel decide to swap stuff out on servers when there is literally 10GB of available memory sitting there. Never happened (to me anyway) that I can recall on older kernels.

I currently run a Lenovo P15 laptop with an 8-core Xeon and 128GB of ECC RAM (~108GB available); not that I need that much, I just decided to max it out for no reason. My last laptop (Lenovo P50) I eventually upgraded to 48GB (max was 64), though I probably never went past 20GB of usage.

But even my new laptop had issues with memory (at least with VMware Workstation), until I set the sysctl "vm.compaction_proactiveness=0" to stop the kernel from trying to defragment memory (a feature introduced in kernels newer than 5.4; the older laptop had an older kernel which didn't have this issue even though the OS was the same, Mint 20). With that feature on, the system would freeze for periods of time while it defragmented memory (for me it was rare, as it took a long time to fill up the 128GB of memory; I only managed it by copying a large amount of data, filling the memory with cached stuff).
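For anyone wanting to try the same thing, this is roughly how I'd apply it (the sysctl.d filename is just an example; run as root):

sysctl -w vm.compaction_proactiveness=0
echo "vm.compaction_proactiveness=0" > /etc/sysctl.d/90-no-proactive-compaction.conf   # persist across reboots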

so yeah, linux and memory have really gotten annoying (for me) in the past ~8 years or so.

VMware reveals critical vCenter vuln that you may have patched already without knowing it

Nate Amsden

I guess you are on 6.5 because you lack licensing for 7? This is a vCenter bug, not a hypervisor bug, and vCenter 7 can manage ESXi 6.5 without any issue. There are no special hardware requirements for vCenter 7.

Citrix urges 'immediate' patch for critical NetScaler bug as exploit code made public

Nate Amsden

wondering if these exploits work...

...against Citrix VPN systems that have no users connected? It seems like exploitation requires at least one user session to be active. I disabled my main access gateway ("disable vpn vserver <name>") a few months ago (following the last exploit). I did use the access gateway and plan to get back to using it again, but upgrading from NetScaler 12.1 to 13.0 broke Duo authentication and I haven't gotten around to fixing it yet, so it has been sitting without anyone connected to it for a few months (I was the only one that used it; everyone else uses another VPN, which I am now using as well, though I liked Citrix more).

NetScalers by default come with a 5-user VPN license, I believe, so it wouldn't surprise me if there are a lot of systems out there that have VPN configured but don't get much usage. In my case I kept it alive over the years primarily as a backup VPN for other IT staff in case there was an issue with the primary system (which uses a completely different vendor/product); it didn't cost anything since we needed the load balancers for load balancing anyway. I still patched anyway, even though the exploits would likely be completely ineffective as my only VPN vservers were disabled.

Analysts scratch heads over MariaDB's decision to ditch DBaaS crown jewels

Nate Amsden

Re: Well Nate,

That's assuming their legacy base does move on; if they are using MariaDB they are using it for a reason, otherwise they would be using Oracle's own MySQL.

If some do move away from "hosting it themselves" (whether "on prem" or "in cloud") to a DBaaS, it's still a lot of reaching to assume those customers would go to Maria's DBaaS platform. I'd wager most MySQL DBs out there are really basic setups without much data in them (think WordPress and such), and are not in need of a premium DBaaS offering.

I have no doubt customers like Samsung were wooed by huge discounts and other things to get them to be token customers to show off to other potential customers as validation. Samsung isn't exactly lacking the $$ resources to have DBAs run things; they probably have a lot in house for other databases anyway.

Another reason they may be using MariaDB is that it is available as (and in some, perhaps many, cases is) the default MySQL server on many Linux systems: https://mariadb.com/kb/en/distributions-which-include-mariadb/

Nate Amsden

wtf do these analysts suggest

Obviously mariadb was running out of cash. They made the right choice to focus on their core tech. They spent months searching for sources of funding.

It was never a good idea to go compete head to head with the big cloud players without a ton of cash. Apparently MariaDB raised only $100 million from their IPO (the rest going to investors)? The fact they had to use a SPAC was a very bad sign.

Maybe analysts would be happier if MariaDB stayed the course in cloud but went out of business entirely by the end of the year (or whenever that was due to happen).

Maybe some cloud provider would be interested in tossing cash to mariadb to buy their cloud tech and hire that staff to run as their own.

As a mariadb user I hope they stick around for a while. Though migrating to something else like percona wouldn't be difficult.

MariaDB ditches products and staff in restructure, bags $26.5M loan to cushion fall

Nate Amsden

forced to explain to customers...

"Yeah, we realized a bit too late that cloud is really expensive and not worth it since we don't have unlimited money, and are unable to charge a fee that would allow us to operate that way and still have any customers".

Nice to see more people realize this, though perhaps too late for MariaDB; time will tell. I've used MariaDB for several years, after migrating from Percona when Percona jacked up their fees by about 10x many years ago out of nowhere. Though not in cloud, and not paying for any support. It seems to be a fine product, but I'm not a DBA, though I sometimes play one on TV.

EFF urges Chrome users to get out of the Privacy Sandbox

Nate Amsden

only a matter of time

before google removes the ability to disable it. Of course if you really cared about privacy then you wouldn't be using Chrome to begin with. Sadly most people don't care(even techies).

AMD's latest FPGA promises super low latency AI for Flash Boy traders

Nate Amsden

what are those ports?

looked them up myself

"The Alveo UL3524 card hosts four 8-lane small form-factor pluggable (QSFP-DD) connectors that are housed within a ganged 1x4 QSFP-DD cage assembly with heatsink. It can be populated with QSFP or QSFP-DD direct attach copper or optical modules supporting up to 7W. The QSFP-DD can connect interfaces up to 100G using optical modules or cables. A 161.1328125 MHz clock is provided to the QSFP-DD interface such that different Ethernet IP cores can be enabled."

https://docs.xilinx.com/r/en-US/ds1009-ul3524/Network-Interfaces

UTM: An Apple hypervisor with some unique extra abilities

Nate Amsden

reminds me of

Executor; it seems I still have my copy of 2.0W from 1998. It went open source at one point, and has been forked and is still being developed.

https://www.emaculation.com/doku.php/executor (history)

https://github.com/autc04/executor (current)

"Executor was a commercially available Mac emulator in the 90s - what makes it different from other Mac emulators is that it doesn't need a ROM file or any original Apple software to run; rather, it attempts to re-implement the classic Mac OS APIs, just as WINE does for Windows." (from the 2nd website)

The last time I tried Bochs (directly anyway; unsure if anything I've used since has used Bochs as a backend) was 1998 too. It was cool to see, but performance made it unusable for me at the time; fortunately VMware for Linux came out a short time later...

Red Hat bins Bugzilla for RHEL issue tracking, jumps on Jira

Nate Amsden

been a long time

I was only ever really exposed to Bugzilla as a user once, at one of my first jobs back around 2001. At least at that point it seemed like an unusable mess; no positive memories of it. I first started using Jira in 2006 (I think) and it was really nice for the basic things we needed it for (I'm in tech operations, not software dev), though the developers used it too. I have been using Jira (and Confluence) ever since, across every job (the solutions were always in place before I was hired).

Though it is true that, IMO, the Jira/Confluence UI has gone crazy over the past 5-7 years (maybe more, I lost track); for me the changes are mostly negative. At least they seem to have stopped their original plan of trashing the "legacy editor" in Confluence. For a couple of years they were really marching towards that, saying everything had to be migrated, even when the new editor didn't work well (and it still has some glaring issues today that remain unfixed, especially with table auto-sizing, which by contrast worked fine in the old editor). My favorite version of Confluence was I think v3? The last one where you could still edit the wiki markup.

At my last gig, at one point management tried pushing for this "kanban" board BS, which was just a waste of time and fortunately died on its own pretty quickly.

I use Xwiki at home which is pretty good.