Re: Not invented here
looked for this specific comment (or one like it), the first thing that popped into my head, upvoted
well, when I was in the Navy I went to a soldering school where they taught soldering techniques according to NASA specs. You had to visually QA your work under a lighted magnifier, no 'pits' no 'tits' etc. You had to use solder with mildly activated flux that was 63:37 (Sn:Pb), the eutectic alloy with the tightest thermal range for the plastic region [meaning that you're way less likely to have a cold solder joint], and you used specific types of crimp connections when making cables, crimped with the correct tool, through-hole leads were bent in specific ways before inserting into the circuit board, yadda yadda.
In short, it restricts you procedurally to constructing and repairing things in a manner that gets the highest possible reliability. In this case, it was for equipment used to control nuclear reactors, which had to stay running on a warship that might get hit by things that explode and send massive vibrations throughout the ship, potentially causing electronics to 'jitter' or even fail. The last thing you want is to have a nearby depth charge cause a reactor safety shutdown, leaving you without propulsion for a short period of time, and then NOT being able to start it back up again because it keeps shutting down when the system is shocked.
anyway, similar requirements here for handling in-flight vibration and supersonic shock waves.
Hard to use and the developers are more interested in adding new features than improving usability.
I'm curious, would you expect a "modern" (*cough* *cough* *cough*) interface to be migrated to instead of adding NEW features [and NOT breaking the old ones nor re-inventing the UI in the process] ???
Isn't the entire point of software DEVELOPMENT to a) improve existing features by making them work more efficiently or enhancing their abilities and b) do so without breaking what people are already doing with it?
And I say this with the full snark that's due for ALL of those projects whose managers *FEEL* as if they have to completely (and capriciously) re-invent the user interface to comply with whatever "So and So" is doing to THEIRS... or maybe just whatever's trending at the moment on some social media platform.
So, my love of gimp INCLUDES its having kept that very stable familiar user interface every time I update it, which is still pretty much the same thing I've grown used to over the years, quirks and all. I suppose there are similar graphic editors out there that could be considered "even more complex", like Blender...
but that's the point - they found something that worked, and aren't "scampering about" trying to re-invent something that works, instead they are focusing their LIMITED available development time on things that ACTUALLY IMPROVE it.
there is a real need out there that is being served.
Not only that, but there have not been any *RADICAL* *RE-DEFINITIONS* of the existing interface, either!!! (and if there were, I'd consider FORKING it to resist any "change for the sake of change" like we see in OTHER things... mutter mutter mutter)
Once you learn how it works, gimp is an 'easy to hack with' way of editing graphical things. I've created lots of useful work-related and fun-related graphics with it. Photo editing is easy, and the 'lasso' (free select) tool lets you chop off sections and move them, etc., even things like pasting a head from one photo onto a different body for laughs, easily re-adjusting for size and so on.
Worthy of mention, the stretch/perspective tools are pretty intuitive, so you can do the head-pasting trick, or take a blank wall or computer screen or white board that's at an angle in the photo, and then paste text and/or graphics on it, like with a funny 'meme' thing, and do so easily and with good quality results (as in 'it looks believable' if you do it right).
There are a few things that Windows does more easily, and I'd used the old MS Office photo editor before. The older MS Paint from before the "ribbon" appeared has a few features like making circles and rectangles and filling them with a color, for example, as well as Bézier curves, and stuff like that. However, I usually don't do "those things" and maybe there's a "Script-Fu" thing written out there that WOULD do them... [I've found a few hacks for the circles and rectangles already]
All in all, 'gimp' is a VERY useful application! I don't know of anything better in the OSS world.
Linux hardware requirements these days are not too dissimilar from those requirements
NOT true... I run Linux on older hardware quite a bit, actually. I still have an old laptop (from 2003-ish) that has Linux on it, an older debian release, that I used for consulting a year and a half ago when I needed a Linux box and they didn't have one available for me. All I needed was ssh and some file and network utilities to talk to a couple of RPi devices, but windows is _SO_ pathetic for that, so I used a 2003-ish laptop with only 512MB of RAM and a relatively TINY hard drive as a PRODUCTIVITY booster.
A week later they handed me a year-old CPU box that I took home and put Linux on, overnight. No problems since.
Now I _WILL_ admit that *GLUTTONOUS* *PIG* applications like Chromium and Firefox [which manage memory about as well as an alcoholic manages booze] won't run very well on that old laptop, since they're so "modern" and all, but pretty much everything ELSE [including gimp] seems to work JUST FINE on a system with less than 1GB of RAM, running X on a 1024x768 laptop screen, with a debian release that barely pre-dates its inclusion of systemd.
Then wonder how a company with coders like this came up with IE, Teams and ShitePoint.
I've wondered how a company that came up with the 3D Skeuomorphic look for Windows 3.x _AND_ OS/2 PM 1.2 (or was it 1.3?) could *SUDDENLY* *ABANDON* *IT* *ALL* and *GO* *BACK* to a 2D FLATTY FLATSO look that Windows 1.x and 2.x had... _ESPECIALLY_ since the reason Windows 3.x _WAS_ _SO_ _SUCCESSFUL_ is _EASILY_ explained by that very same 3D Skeuomorphic look that they *ABANDONED* in Windows 8 through 10 !!!
So yeah, same *kind* of frustration, more or less.
There's already an alternative for Twitter (Parler)
There's probably going to be an alternative for Facebook if anyone wants it [I heard about one recently, could not remember the name]
There may already be an alternative for youtube.
It may simply be time to ABANDON the arrogant google+facebook+twitter monopoly and go elsewhere until they start treating customers like CUSTOMERS instead of "revenue generating units"...
how they're going to prevent this.
don't log any personally identifying information - that'd be a good start. Then assume that a single application's usage information might generate duplicate entries, so some other means of deduplicating transactions (besides personally identifying info) would be needed. A reasonably long hash based on a user's info might do it. THAT kind of thing.
None of this is hard, it just means setting it up so that the aggregate data can NOT be tied back to whoever or whatever generated it.
Now I have to ask the OTHER data slurpers why THEY aren't doing it THIS way... since it IS _SO_ SIMPLE!!!
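FWIW, a minimal sketch of that 'reasonably long hash' idea, using sha256sum. The field contents below are hypothetical placeholders, and a real deployment would keep the salt secret (or use a proper keyed hash) so nobody can dictionary-attack the keys back to the users:

```shell
# Sketch: derive an opaque, deduplicatable transaction key from user info.
# 'salt' and 'user_info' are hypothetical placeholders.
salt="per-deployment-random-salt"        # never logged alongside the data
user_info="user@example.com|app-1234"    # whatever uniquely identifies the source
txn_key=$(printf '%s%s' "$salt" "$user_info" | sha256sum | cut -d' ' -f1)
echo "$txn_key"    # 64 hex chars; same input -> same key
```

Same input always produces the same 64-character key, so duplicate reports collapse to one entry, but the aggregate stays unlinkable to whoever generated it.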
And since Puerto Rico is an island in the Caribbean it stands to reason the telescope cables were exposed to a LOT of salt spray.
From the article: An official investigation into what caused the cables to break away was launched in August,
I can already tell you what did it: chloride embrittlement and corrosion in general, combined with cyclic stress fatigue with continuous tensile stress. *SNAP*
Certain kinds of stainless steel are affected by chloride embrittlement, and though you may not see corrosion, you might STILL see pitting on the metal, and constant tensile stress causes microfractures into which the chlorides (from salty air and hurricanes and stuff like that) penetrate, embrittling it further (if I remember correctly). This is also somewhat the case for copper conductors and "just plain steel". Salty air is bad for them, yeah.
You would generally need to paint and/or coat all cables with some kind of anti-corrosion paint (or other coating) that typically has chromates or some similar material in it, and possibly use sacrificial anodes [if even possible] to limit galvanic corrosion. As far as chloride stress corrosion goes, you might not be able to stop it if the material is susceptible.
'Work hardening' due to cyclic stresses can also result in cracking and even total failure. 50-something years of hanging there through hurricanes might explain that, yeah. If you bend soft metal back and forth enough times, it cracks and breaks. Same idea.
And when a stress crack forms, its physical properties (along with salty air) promote corrosion that just makes it worse.
if the information is filed in the court case, then the defendant receives a copy. However, they are NOT privy to everything the Feds have if they do not actually FILE it. That's my understanding of the evidence rules. Should the judge decide otherwise, it would become a motion to compel or similar, in which the government would have to produce requested documents as a part of the discovery process. But the government would be able to object to this, and the judge would then decide whether the evidence would have to be disclosed (or not).
Basically, if it's not filed, it's not part of the case. Anything beyond that would require a subpoena and other legal processes to get it. But IANAL either, I've just had to deal with all of this sort of thing before. Laws vary from jurisdiction to jurisdiction, but the general idea is that once it's filed as part of the case, BOTH parties must have full disclosure with one another. But if not filed, no need to disclose it. I'm sure the DEFENSE doesn't want to disclose everything THEY have, now do they?
as a musician, I object to RIAA's monopolistic dominance in music distribution, their marketing and royalty policies that harm musicians by promoting CRAP at the expense of royalties to GOOD musicians, and their general attitude towards their CUSTOMERS (content consumers). What they did to used CD purchases, for example, should be CRIMINAL.
And worthy of mention, a hall of shame of artists burned by RIAA member labels:
* Smashing Pumpkins
* Prince (who had to change his name for a while)
* The Beatles (why they formed Apple and couldn't perform old hits)
* Salt-N-Pepa
(and MANY, many others who've had horrible contracts, even leading to bankruptcy)
Hopefully RIAA will sod off for a bit
unlikely. they seem to act as if every web site is Napster, and every content consumer is ripping them off or something.
I have one suggestion to RIAA: stop force-marketing CRAP at inflated prices, and if it's good, affordable, AND available for purchase, people WILL pay for it. Suing everyone for revenue is a BAD model, kinda like patent trolling...
but.. but... but... KDE is NOW all 2D FLATTY FLATSO, at least from what I have seen.
At least Mate (based on GTK 2 last I checked) CAN have *Nice* 3D Skeuomorphic and colors, rather than the 2D FLATTY of Chromium, Australis, and Win-10-nic. Similarly, Cinnamon (which, as I understand, uses GTK 3).
Whatever Gnome is doing nowadays, Pfftt.
I use the '?:' syntax frequently enough. If you put them into the arguments passed to 'printf()' you can adjust output based on a value, like:
printf("You answered %s\n", (const char *)(bYesNo != 0 ? "Yes" : "No"));
(but you kinda need to force some parens and type casting in there so certain compilers won't gripe nor get it wrong)
when working with a microcontroller, having ONLY A SINGLE FORMAT STRING for multiple possible outputs saves NVRAM. But of course, YMMV and you probably want to test the candidate refactorings before settling on the one with the smallest footprint.
The biggest source of bugs in today's bug ridden software is total reliance on the high level of abstraction imposed by most currently used languages.
'Sauce' on that? Not disagreeing, just curious...
I wouldn't necessarily use 'source of bugs' but rather "source of inefficiencies and bad coding practices".
After having to clean up a Django web service by having it invoke some C language utilities to do the REAL work (and getting 10:1 performance benefits as a result, saving MINUTES of 'wait for it to finish' data transfer time for end-users), I can surely sympathize.
Python is magnitudes faster to build clean code that works.
depends on what you're doing. Although I've written sample code in python for things _LIKE_ serial comms demo programs that control a device via USB serial, mostly because python would be understandable by the intended audience, I wouldn't use that language to develop anything BEYOND a simple demo, MOSTLY because the handling of binary structures and binary data in general is PATHETIC in Python.
Additionally, the handling of arrays (in general) is ALSO PATHETIC in Python. I suppose the need for binary structure packing as well as array'd data (in general) is kinda the same basic problem...
"Clean code that works" - if your "clean code" is "slap these pre-written 3rd party objects together with 'glue' to become 'an app'", then maybe. As for me I'd rather have COMPLETE CONTROL over EVERY aspect of the code, particularly for high reliability things. No 'midnight phone call' or 'angry customer' for anything _I_ write. (it's also why I like to statically link the installed binaries, even for Linux)
realistically, all dating methods include assumptions. Using tree ring C14 variations to help calibrate the others is only going to make it more accurate, when results from various methods converge in a repeatable manner [once calibrated correctly].
Currently the 'isotopic decay' methods are all subject to various errors, centering around "what was the amount of material at the beginning of the decay chain". Until we have a way of accurately projecting that backwards, based on converging "date determination" results, we'll just have to assume it's all "just a guess".
There is a layer in the rocks, however, at the end of the Cretaceous period if I remember correctly (verified, the K-T boundary), that is said to mark the period when the meteor that allegedly killed the dinosaurs struck near what is now the Yucatán Peninsula (the Chicxulub crater, partly under the Gulf of Mexico) and caused a multi-year darkening of the skies... so that event is probably in trees as well, fossilized ones at any rate. Apparently there are materials, isotopes, and structures in that layer that corroborate something _LIKE_ a meteor strike happening and causing a world-wide catastrophe.
In any case, corroborating tree ring info in fossil trees might be a cool thing to find. OK 60-something million year old tree fossils, but still... who knows what we might find if we look for it?
13,000 year old tree rings...
fossils maybe? [it could happen...]
This topic also implies the use of various dating methods that include carbon 14 dating. And using tree rings to catalog the variations in C14 for a given year just might help calibrate this dating method in ways that greatly improve its accuracy by (essentially) data modeling the "chaotic" component and creating 'fingerprints' (of sorts) for particular groups of years.
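For concreteness, the 'raw' number everyone starts from is just exponential decay solved for time (8033 years is the conventional Libby mean-life); the tree-ring calibration curve is what then converts this radiocarbon age into calendar years:

```latex
% measured activity ratio of the sample vs. a modern standard
% gives the uncalibrated ("conventional") radiocarbon age:
t_{\mathrm{raw}} = -8033 \,\ln\!\left(\frac{A_{\mathrm{sample}}}{A_{\mathrm{modern}}}\right)
% tree-ring curves (e.g. IntCal) then map t_raw onto a calendar date,
% absorbing the year-to-year C14 "chaos" described above
```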
Also reminds me of that Dr. Who episode where the earth was covered in forest for a day or so...
From the article: Outbound domain transfers do seem to be working
a feature that is, no doubt, being used quite a bit...
I don't have either of the registrars that I use managing anything but the registered domain. The actual DNS is either done by the ISP providing the web site stuff, or by me. BIND is NOT hard:
rndc freeze
delete the transaction file for the zone
edit the zone file using your favorite text editor (pluma, ee, nano, whatever)
rndc thaw
no problem
(also need to set up the DHCP hooks for on-demand naming by DHCP clients, but that's not hard either, using isc's DHCP server)
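For the record, a minimal sketch of those hooks. The zone name example.lan and the key name ddns-key are hypothetical placeholders, and the exact option names vary a bit between isc-dhcp versions:

```
# /etc/dhcp/dhcpd.conf (isc-dhcp-server) -- sketch only
ddns-update-style standard;        # older releases used 'interim'
ddns-domainname "example.lan.";
key ddns-key {
    algorithm hmac-sha256;
    secret "base64-secret-here";
}
zone example.lan. {
    primary 127.0.0.1;
    key ddns-key;
}

# named.conf -- same key, and let it update the zone
key "ddns-key" {
    algorithm hmac-sha256;
    secret "base64-secret-here";
};
zone "example.lan" {
    type master;
    file "example.lan.zone";
    allow-update { key "ddns-key"; };
};
```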
as for the registered domain name, no problems with either Verisign or GoDaddy. OK, maybe I pay more, but sometimes you get what you pay for.
more than likely they'll move OUT of San Francisco within the year.
I hear other states like Florida and Texas have corporate friendliness, and MUCH lower cost of living.
Think about it: if you're NOT being taxed at that 'progressive' (read: punitive) rate and your cost of living is HALF of what it was, you could literally AFFORD to be paid 1/3 or even 1/4 of what you WERE being paid, and still live JUST as well as you did before (possibly BETTER). Just convert some of that wage pay to stock options or 401k or something ELSE that's not immediately taxable, and move to one of THOSE places, and I bet you'll see corporate bottom lines improve AND the CEOs be just as "wealthy" in their lifestyle.
Or you can pay all of your wages in tax, demand a compensating raise, and watch employees get laid off, just for the "privilege" of a San Francisco corporate address...
WARNING: do not execute these commands until you understand what they do...
rm -rf ~/.cache/chromium/Default/*
rm -rf ~/.config/chromium/*
This also deletes preferences, but you may still find it useful. If you want to KEEP your preferences, then do this first, then reopen chromium, set your preferences, and check the date/times on the files to see which ones you should keep.
Just deleting the cache is not enough. SOME things, like those cookies you mention, reside "ELSEWHERE". You need to figure out which files/dirs CAN be removed, and remove them. I use a script with 30+ lines in it. Rather than post here, you can probably figure out which files need keeping, and delete the rest. No harm to chrome, it just returns to default if you delete a config file.
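That said, the skeleton of such a script is short enough to sketch. The file names below are from my own setup and vary by chromium version, so verify the paths on YOUR system before pointing 'rm' at anything:

```shell
#!/bin/sh
# Sketch: wipe chromium browsing traces but keep preferences/bookmarks.
# File names vary by chromium version -- check your own profile first.
wipe_chromium() {
    profile="$1"   # e.g. ~/.config/chromium/Default
    cache="$2"     # e.g. ~/.cache/chromium/Default
    rm -rf "$cache"                      # the easy part: the disk cache
    for f in History Cookies "Web Data" "Visited Links" "Top Sites"; do
        rm -f "$profile/$f"              # history, cookies, form data, etc.
    done
    # deliberately left alone: Preferences, Bookmarks
}
```

Call it as 'wipe_chromium ~/.config/chromium/Default ~/.cache/chromium/Default' (with chromium closed first).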
the Google monster is having trouble tracking me with my choice of VPN, browser and add-ons.
It's been my observation that when I do the following:
* use chromium
* erase ALL history in an anal retentive way before I go someplace that uses CAPTCHA
The CAPTCHA puzzles are SIMPLER, and more frequently I just get a checkbox.
Thought I might mention that...
In most cases, I only have to click a checkbox marked I am not a robot;
My experience has demonstrated that CAPTCHA does NOT like you to use non-chromium browsers.
* Nearly every time I use firefox, I have trouble with as many as half of the captchas
* Specifically, the captchas that involve a slow fade-out and slow-fade in of a picture you might need to click because of an object on it *ALWAYS* *FAIL* whenever I use Firefox. This has been going on (sometimes worse, sometimes just bad) for ABOUT A YEAR now.
* There does NOT seem to be any kind of reasonable feedback or customer service contact on this [I have tried, in one case an e-mail address]
* Using FreeBSD and/or Firefox should _NOT_ make you "suspect" nor cause you to get "nothing but the hard ones that require a screen magnifier to solve and involve more than 2 'next/verify' buttons"
* NOT having "the most bleeding edge browser" should _NOT_ make you "suspect" [and it should STILL WORK PROPERLY]
* rejecting 3rd party cookies or setting 'private browsing mode' should NOT make you "suspect"
I have seen government services for the state of California actually USE CAPTCHA which is EXTREMELY ANNOYING because of this sort of thing.
I have also been waiting for a chance to RANT about this where someone else might actually CONFIRM it independently. I do frequent a particular web site that uses captcha to control user posts to (try to) prevent abuse. So I'm basically FORCED to USE CHROMIUM for this.
(but I have great hacks in place to prevent everything I do from being tracked by "that one browser" that doesn't have noscript or cookie blocking or private browsing or any OTHER mitigating 'thing' and I use a script to DUMP ALL HISTORY which is comprehensive and seems to work just fine...)
Shirley wget or curl already does a more than adequate job?
youtube-dl de-obfuscates the actual video URLs, downloads higher resolution video than your connection might be capable of live-streaming, AND creates a convenient '.mkv' out of it all [in most cases]. It ALSO does NOT require a JavaScript player to view the videos, so for SECURITY, it is MORE than ideal.
The problem is that youtube and the other video sites are busy TRYING TO OUTWIT IT, by changing things frequently, like a moving target. Last week the tool worked. Next week it MAY NOT. And GOOD LUCK finding out what URL the video is REALLY on without excess MANUAL EFFORT.
This is why the script is SO BRILLIANT and SO POPULAR.
Youtube should just have a "download" button anyway.
all we need is ANOTHER LOCATION for the youtube-dl code, maybe somewhere in Finland... or put the tarball on USENET
(why these cloudy repo sites would be so "essential" is beyond me)
And what's to stop OTHER web sites around the world from hosting a mirror? All the youtube-dl maintainers (I use this script myself, so I don't have to watch skippy video) need to do is post a set of links and a set of hashes (along with file size) to verify the thing, and voila! It's not that large anyway (~1.7MB uncompressed) and consists of a SINGLE python program.
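The verification step really is trivial; a sketch, with a placeholder where the maintainers' published hash would go:

```shell
# Verify a mirrored copy against the published SHA-256 (sketch).
# The second argument is the hash the maintainers would post alongside
# the links -- the value here is purely a placeholder.
verify_mirror() {
    file="$1"
    published="$2"
    actual=$(sha256sum "$file" | cut -d' ' -f1)
    if [ "$actual" = "$published" ]; then
        echo "OK: $file matches"
    else
        echo "MISMATCH: do not run $file" >&2
        return 1
    fi
}
```

Anyone can mirror the file; only the short hash list needs to come from a trusted place.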
So a paste to USENET might be the easiest way to do it. Now I'll need to subscribe to some of the 'binary' groups
what does use cryptography by default
probably anything involving the Windows CryptoAPI, from web browsers and e-mail to domain logins and file sharing. But that's just a guess on my part.
Fair bet I won't get a patch for Windows 7 either. I'd like something I could manually install, please, rather than getting mandatory updates enabled via "windows update".
XWayland stacked on Wayland
How to actually make something like this WORK... (but will the alleged CHILDREN working on the Wayland project ever consider such a reasonable proposal from an OLD FART like me?)
* XServer uses X.org implementation "as-is" running like a subsystem layer [my read-up on XWayland just now suggests it is not really like this at all]
* XServer runs independently, NOT invoked by Wayland (so you can have it listen on a TCP socket, just like always, NOT like XWayland currently is, if I read it correctly)
* X11 Video driver and glx and shared memory extensions do the actual work, implemented by X.org as a video driver, and not necessarily a part of Wayland [but with X11 hooks in Wayland as needed]
* OpenGL lib similar to what NVidia does, that does more direct video things at a low level, so you get equal [and possibly better] performance with X11.
* XVideo hooks via the X11 video driver.
This would essentially be 'wayland as a soft video driver'. It would support remote X11, and existing X11 applications without modification. It would essentially use the SAME X.org server code.
But here's what it would NOT do: it would NOT force EVERYONE ELSE to CHANGE, thereby giving up tried/true for the NEW SHINY, sorta like in Arthur C. Clarke's "Superiority" [mentioned again].
Get rid of the ancient cruft and add new features, optimising as you go.
I'm not sure there's a LOT of "ancient cruft" there. Some of that still has use, especially for running a GUI application on an embedded system (let's say something in the hundreds-of-MHz clock frequency ballpark) across a network (let's say a WiFi G network with lots of interference) and still getting reasonable UI performance in which the application is still "usable". Not like you're playing videos or anything...
I like to think of that "ancient cruft" as "well tested bug-free code".
X11 has a LOT of legacy stuff in it, which I expect is NOT used all that often (except by X sample applications like xclock etc.). And better docs for the more recent extensions would be nice, like glx, shared memory extensions, and so on...
Freetype and related stuff are actually add-on libraries.
THAT being said, it is NO excuse to go and ABANDON a working system for "something that more resembles the way Micros~1 does it" like WAYLAND, *ESPECIALLY* when you *LOSE* *CAPABILITIES* that are EXTREMELY IMPORTANT!!! You know, like the 'DISPLAY' environment variable and REMOTE X server support.
/me uses pluma on an RPi to edit files NEARLY on a daily basis for a major client of mine, using 'DISPLAY' to edit source files on a Linux workstation across the network, often WIRELESSLY. It's a touch screen system, like so many others out there, I bet. NO WAY could I do editing on that touch screen, either, and it needs to have the application RUNNING ON THE TOUCH SCREEN to test the code I'm working on. I can't imagine the HORRIBLE INCONVENIENCE AND SLOWDOWN that a LACK of remote X11 capability would cause...
(yeah NOBODY's doing that remote X server thing... right? What, HALF the people who read El Reg do that at least OCCASIONALLY? That sounds about right)
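For anyone who hasn't tried it, the whole trick is ONE environment variable. The hostnames below are hypothetical, and the ssh route needs X11 forwarding enabled on the far end:

```shell
# Old school: aim DISPLAY at the workstation's X server ("host:display")
#   DISPLAY=workstation.local:0 pluma source.c &
# Or tunnelled over ssh, which sets DISPLAY on the far end for you:
#   ssh -X pi@rpi.local pluma /home/pi/project/source.c
# The only moving part is the variable itself:
DISPLAY=workstation.local:0
export DISPLAY
echo "X clients started from this shell will draw on $DISPLAY"
```

(The workstation's X server has to allow the connection, either via xhost/xauth or ssh's built-in forwarding.)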
X11 side - (for the clipboard)
Not that hard, and a bit more flexible than it is in the windows world. In theory, within the X11 world, you can store multiple data types on the clipboard (each with its own owner), and not "just one thing" (they don't even have to be for the same object or application, etc.). Additionally it's really not all that hard to make use of in the X11 world. Many clipboard APIs and libraries exist for just about every toolkit I'm aware of. It's more or less a "solved problem" with a lot of sample code available.
I use a background thread and some sync objects (in my own toolkit) to make it appear to work synchronously from the main thread. Copy/paste performance is pretty good. The UI is basically seamless, no stuttering or weirdness. The code is simple (a synchronous call, similar to the Win32 API in that regard) with the toolkit doing the async stuff in its own thread as needed.
There is a nice command line example 'xclip' that's not all that hard to follow, which lets you access the clipboard from the command line. So if you need clip functionality and are NOT using a toolkit (or are writing your own), this is likely to be very useful to you (it was for me, a long time ago):
https://github.com/astrand/xclip
Question: why abandon something that works and is in long term maintenance mode?
2nd Question: who here has NOT read Arthur C. Clarke's "Superiority" ?
I'm still not clear how this is supposed to be accomplished in the new spiffy Wayland-only world.
It's not. Wayland becomes a way of FORCING LINUX USERS INTO A MS WINDOWS MODEL, just like systemd and "other Poettering things"
And, according to the article, X.org is *HARDLY* ABANDONWARE. Last commit a few hours ago, right?
If they need more people to keep X.org alive, I'll volunteer MYSELF.
But... I will *NOT* be *FORCED* into *WAYLAND* and *LOSE* the *ONE* *BEST* *FEATURE* of X11 protocol, the use of the DISPLAY environment variable to run on a REMOTE desktop [or the same one, with a different user context].
that word - "modern". Hey RED HAT, I do NOT think that word means what YOU think it means!
When I read "an application that has been stable for years might now fall over" it just reminds me of the *KINDS* of unintended consequences these "developers" (with real-world blinders firmly in place) "feature creep" into things that have a large user base, only to "not give a crap" when things ULTIMATELY DO FALL OVER.
This is why I will *NEVER* do an application that depends on "the latest bleeding edge version" of something *LIKE* Node.js
We have seen this sort of thing from dependent libs at least a FEW times, particularly when developers capriciously "decide" to withdraw their content, out of spite or malicious intent, doesn't matter, same result.
The C language was originally set up to fail gracefully, which is a major reason why it's ideal for kernel development. In a typical kernel "thing" you already test for error conditions BEFORE you work with something. You force unused pointers and structures to a default state (typically 0 or NULL), and check for it at specific points where it matters. Errors become something you TEST for, and NOT something that CRASHES things.
Back in the day, the FRUSTRATION of a single line of code ending your run in "?Something error at line 500" is what (most likely) drove this feature in C. Also consider the "Unrecoverable Application Error" box in Windows versions before 3.1. That infamous UAE caused SO much frustration, it HAD to be dealt with.
Now Node.js has its OWN version of the UAE, simply because the unhandled promise rejection, once a warning that COULD BE IGNORED, is *NOW* a FATAL END OF THE UNIVERSE *ERROR*.
Thanks for that, Node.js "developers".
it's why back doors themselves should NEVER be used. Classic example here, in which OTHERS have discovered the keys, and the existence of the back doors has been revealed, defeating their very purpose and compromising EVERYTHING gummints were attempting to use them for.
They need to do REAL investigating. You know, like the OLD days.
Open source may provide a perfect solution to this. How about hardened Linux router software that's 100% open source that you can subsequently load onto Cisco's hardware and thereby ELIMINATE the problem? Peer review would find any back doors. Maybe Linus could make it happen?