> And yet some of us still manage to keep our operations running and malware free as far as we know
Fixed that for you.
The volume of malware threats is actually on the decline despite the increase in breaches, according to a study from Websense Security Labs. Websense Security Labs logged 3.96 billion security threats in 2014, 5.1 per cent fewer than in 2013. Despite this, the number of high-profile breaches increased. Hackers have …
It depends on what you mean by secure :)
At an old workplace, they only discovered that their security had been compromised when their Asiatic rivals started applying for patents on the processes they were at the final testing stages for. Some of the documents were pretty much word for word on the technical side.
So we had lots of backups, lots of security logs, but it turns out if a senior researcher uses the same credentials for 10+ years, and uses them on pretty much every site he can, then access is easy.
Oh, and the same researcher would take work home on USB sticks, bring them back, and be shocked that they were full of malware. Turns out his home machine was XP SP2, no AV, firewall disabled.
Being able to restore your data is only one aspect of the issue. Making sure that data doesn't escape the confines of your compound is rapidly becoming a bigger issue. Especially now that business is mandating collecting *all* data into "Data Lake" models. Security of data and access controls make *everyone* in the industry twitch. Subverting the security model with "social engineering" is becoming easier rather than harder from what I can see.
And I'm still dealing with devs who come back with "let's chmod 777 that file....."
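For contrast, a minimal least-privilege sketch (the file name and mode are illustrative, not from any particular incident):

```shell
# Illustrative file; the name is made up for this example.
touch report.csv

# chmod 777 makes a file readable, writable, and executable by
# every user on the box -- any compromised account or service can
# tamper with it. Grant only what the consumer actually needs:
chmod 640 report.csv     # owner: rw, group: r, everyone else: nothing

ls -l report.csv
```

If a second process genuinely needs read access, put it in the file's group rather than opening the file to the world.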
I usually get downvoted for this, but I still believe that the existence of "malicious URLs" is nothing more than the existence of unacceptable browser flaws. Visiting a web site is 'opening a document'; and it is my belief that it should not be possible for data to subvert the application used for viewing that data and it should definitely not be possible to subvert the system beyond that application.
How are you supposed to even know whether a URL is malicious until you've clicked on it, especially if it is shortened? Or in a QR code? Sure, you can say oh, never click on a shortened URL, never scan a QR code, but then you are missing out on large chunks of functionality.
Depends on your paranoia. I have been using Firefox profiles to put distance between activities.
If you are on Mac/Linux/BSD you can create a "toxic" user and run the browser in there. Or even within a VNC session. Very little chance of it hurting the user/system then.
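A minimal sketch of that "toxic" user idea on Linux; the account name and file are illustrative, and `cat` stands in for the browser you would actually launch (e.g. `sudo -u toxic -H firefox --no-remote`):

```shell
# Illustrative throwaway account; the name is made up for this example.
useradd -m toxic

# A private file belonging to the real user:
echo "patent draft" > "$HOME/draft.txt"
chmod 600 "$HOME/draft.txt"

# Anything run as the toxic user has only that account's
# permissions, so a browser exploit can't touch your own files.
# Demonstrated here with cat in place of a browser:
su -s /bin/sh toxic -c "cat $HOME/draft.txt" || echo "access denied"
```

The profile trick is weaker (same user, so same file access), but it still keeps cookies, cache, and extensions separated per activity.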
I do the same using chromium and chrome (used primarily for google products), but some websites only work with chrome* (wtf?).
I even have an ad-block-free Opera window for sites that I like that need to push ads...
So in general URLs are agnostic. But it's probably a good idea to have some differing user environments to match the viewing target...
P.
> it is my belief that
> it should not be possible for data to subvert the application used for viewing that data
> it should definitely not be possible to subvert the system beyond that application
Those two goals sound like they are achievable.
The thing is, to achieve them, we need to work out how to write complex software that doesn't contain any flaws. No-one's done that yet.