Re: Historical record
Good point about the southern hemisphere.
"NGC 6537 looks very like a cross - and many other planetary nebula exhibit the hourglass appearance that could be interpreted as a cross. The helix nebula is the size of the full moon and 2.5 light years across and may be a candidate - not sure how fast its expanding!
A nearby one may have dispersed by now, or possibly have blown itself away with a greater explosion later on - maybe the crab nebula was planetary before it went supernova?"
The Red Spider (NGC 6537) is almost unique in shape, and at 1.5 arcminutes is not very big (and would have been smaller in the past). The hourglass-type, side-on nebulae like the Dumbbell are not that cruciform, and though the Helix is the size of the full moon, it is very difficult to spot, even through my 8" scope, as its surface brightness is very low. This is a key problem: a nebula's light is spread over its apparent area, so high surface brightness planetaries are small, and any big enough to resolve with the naked eye have very low surface brightness.
If the white dwarf at the centre of a planetary is part of a binary (one such system was spotted last year), it could go supernova (Type Ia); otherwise this is unlikely (not enough mass). There is no indication that the Crab pulsar is part of a binary, I think. There may be a suitable supernova remnant as yet undiscovered, of course. A supernova embedded in a star-forming region could generate strange light echoes on the surrounding dust and gas.
Finally, the "red crucifix" in the sky may have been atmospheric rather than deep sky: a curious illumination of clouds after sunset, noctilucent clouds in a strange formation, an aurora, or a bright meteor which exploded (these can form cruciform patterns). And if your king or local lord had stated he saw a red cross in the sky (after imbibing some bad mead, maybe), stating you could not see it might be a terminal career move ;-)
"A planetary nebula could easily be mistaken for a crucifix so I guess a nearby one would easily cover all the requirements. We just need to find the culprit."
Not the planetary nebulae I know. Besides, they are not large enough to be resolved by the naked eye. Planetary nebulae are not supernova remnants; they form when a star roughly the size of the Sun blasts off its outer layers. However, let us not forget that the 1054 supernova was seen in the East, but not by Western observers (too busy bashing each other's brains in?). I do not know of a nearby supernova remnant which could be a candidate. A gamma-ray burst may be the culprit, as others have noted.
As this event is not so much impossible as very, very improbable, I suggest the Heart of Gold is to blame.
Because it runs my code faster. On 8 or 16 cores I also tend to get a slightly better load balance than on 6 or 12, because the binary tree structure used in the gather phase of many algorithms is then nicely balanced. Furthermore, hyperthreading is great mainly if the different threads share a lot of the data they work on, so that you do not get cache contention issues. In my code I find it does not contribute anything, and can actually harm performance.
The same does not hold for a lot of code out there. Horses for courses. For my desktop, the AMD chip is best (but not with an AMD/Radeon graphics board, because we also use CUDA); others may be served better with Intel chips.
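To illustrate the load balance point: a minimal sketch (not my actual code; the kernel and thread count are made up for the example) of a binary-tree gather over per-thread partial sums, in C++. With a power-of-two thread count every level of the tree pairs all active slots; with 6 or 12 threads the tree is lopsided.

    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const unsigned nthreads = 8;  // try 6 vs 8 and watch the tree shape
        std::vector<long long> partial(nthreads, 0);

        // Phase 1: each thread computes a partial sum of 0..999999.
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < nthreads; ++t)
            pool.emplace_back([&partial, t, nthreads] {
                for (long long i = t; i < 1000000; i += nthreads)
                    partial[t] += i;
            });
        for (auto& th : pool) th.join();

        // Phase 2: binary-tree gather. Each round halves the number of
        // active slots; the tree is perfectly balanced only when
        // nthreads is a power of two.
        for (unsigned stride = 1; stride < nthreads; stride *= 2)
            for (unsigned t = 0; t + stride < nthreads; t += 2 * stride)
                partial[t] += partial[t + stride];

        std::printf("sum = %lld\n", partial[0]);  // expect 499999500000
        return 0;
    }

The gather is done sequentially here for clarity; in a real parallel gather each stride level runs concurrently, and a lopsided tree means threads sitting idle.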
Neat machine, but ultimately beyond my means. A year later I was coding image processing software for a living on an 8 MHz 80286 with 640 KB RAM, and a Matrox PIP1024 image capture and processing board, which had a whole 1 MB of RAM. I yearned for the vast 4 MB RAM of the A440.
"As a (senior) lecturer, you probably have no concept of how unabidably crap the average corporate trainer is. It's a certificate culture out there, and the delegates (I won't demean the term "student" by using it here) are expected to sit, listen, maybe "brainstorm" a bit, then walk away with a piece of paper."
Actually, a colleague had to attend an "Academic Leadership" training course given by a corporate trainer. His description was telling: unabidable crap is a fitting designation. The trainer asked questions like: "What would you do if a PhD student turns up at 9:30 each morning?"
Answers from the (experienced) trainees varied from "Nothing, as long as he gets his work done" to "Commend him for consistently arriving before the head of the department."
These were not the right answers according to the trainer (who clearly had no concept of an academic working environment). He honestly expected people to work regular 9-5 shifts. When criticised that this was not how we work, and that many PhD students work, say, 10 am to 8 pm or longer, he stated this was no way to run a lab. When questioned whether he had ever run a research department, he had to admit this was not the case, but he stuck to his guns that he knew how it should be run.
My colleague and all other trainees considered the course a complete waste of time, but you had to get the certificate for the new tenure track system. I gather they have now got rid of this course.
"Either the team responsible for this cock-up didn't attend - or those teaching the courses need to be fired."
As a (senior) lecturer, I cannot accept responsibility for all cock-ups my students make after taking, or even passing, my course. The people teaching the course in source attribution may have been sterling, but let us not forget a student's ability to forget, misconstrue, or otherwise garble any information or skills imparted to them. I have seen all too often that students learn things only to the level needed to pass the exam, and then get totally plastered to erase that section of memory as effectively as possible. Fortunately, there are also many students who really want to learn and work hard at it. I long ago decided to focus my efforts on the latter class, and lose no sleep over the former. After all, they are grown-ups; they are responsible.
Many image search tools exist; the team should have been able to find the source. If anyone is to be fired, fire those responsible.
You are assuming the lawyer gets paid to help you, which is not how many lawyers see it. They get paid so that they get richer. The longer the lawsuit drags on, the more they charge, and the same holds for the lawyers of UPS. What incentive do they have to get things over with quickly if they get paid by the hour?
Me, cynical?
On the other hand, I would not expect a computer to drive more dangerously than a large percentage of drivers (word used without prejudice) in Crete or Cyprus. Driving there was, let us say, an interesting experience, after which a quick bout of dodging charging bulls seems like a picnic.
I will wait to see the scientific paper on this. A problem with these tests is that they mix hardware and software performance measurement. Gaining speed merely by increasing communication bandwidth (and decreasing latency, for preference) should just get the "duh" response it deserves.
The only ways to see whether two algorithms really differ are to (i) do a proper complexity analysis (computing time and memory/bandwidth use) to see how each should scale theoretically (both in terms of data size and number of processors), and (ii) time optimized versions on the same hardware (or on matched sets of hardware), using a variable number of processors or nodes, as in the sketch below.
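As a sketch of point (ii), assuming OpenMP is available (the kernel, problem size and thread counts are placeholders): time the same optimized kernel while varying only the number of threads.

    // Compile with e.g.: g++ -O2 -fopenmp scaling.cpp
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const long long n = 1 << 24;  // ~16M elements, same data for every run
        std::vector<double> a(n, 1.0), b(n, 2.0);

        for (int p : {1, 2, 4, 8, 16}) {  // vary only the processor count
            omp_set_num_threads(p);
            double sum = 0.0;
            double t0 = omp_get_wtime();
            #pragma omp parallel for reduction(+ : sum)
            for (long long i = 0; i < n; ++i)
                sum += a[i] * b[i];
            double t1 = omp_get_wtime();
            std::printf("%2d threads: %.4f s (sum = %g)\n", p, t1 - t0, sum);
        }
        return 0;
    }

A memory-bound kernel like this one will stop scaling well before the core count runs out, which is exactly the kind of thing such a measurement reveals.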
Oh really. I do not need spectacular visual effects on my desktop unless I am playing a computer game or running scientific visualization software. My OS should not try to dazzle me; I need to get work done. The best OS is the one you hardly notice, and that may involve smart use of visual effects. Some I find useful (compiz-fusion has some things I find very handy, in particular for switching desktops and finding the right open app), but most are just battery-draining eye candy. It is telling that the spectacular visual effects are mentioned before the streamlined navigation (which is useful). I want substance, not bling.
I agree up to a point. Languages do need to change, and they are in fact changing. OpenMP is a sort of "bolt-on" solution for C(++) which allows the compiler to treat for loops as for-all statements, and provides various other mechanisms for syncing. A functional approach such as in Erlang is often proposed. I do have some doubts that we can solve all sorts of problems merely with new languages; we need to learn new ways of thinking about these problems. A good language can inspire new ways of thinking, of course.
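For those who have not seen the bolt-on style, a minimal sketch (the kernel is hypothetical): one pragma turns an ordinary C++ for loop into a for-all.

    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);

        // Without the pragma this is plain sequential code; with it, the
        // compiler divides the iterations among threads. The iterations
        // must be independent for this to be safe.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            y[i] = 2.0f * x[i] + y[i];

        std::printf("y[0] = %g, up to %d threads\n", y[0], omp_get_max_threads());
        return 0;
    }

The syncing mechanisms mentioned above include pragmas such as omp barrier, omp critical and omp atomic.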
I have to disagree a bit here. Parallel computing is great, but at the same time it is hard work, and it is only useful for particular data- and compute-intensive tasks. Memory access bottlenecks have been reduced greatly by getting rid of the front-side bus (guess why Opterons are so popular in HPC), but they are still very much present in GPUs, in particular in the communication between GPU and main memory. There are improvements in tooling, but they are too often over-hyped. Besides, as with all optimization, you need an understanding of the hardware.
Parallel computing is at the forefront of computer science research, and new (wait-free) algorithms are being published in scientific journals, as are improvements in compilers, languages and other tools.
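For a flavour of what "wait-free" means (a toy sketch, not any particular published algorithm): every thread completes its operation in a bounded number of steps, whatever the other threads are doing. A hardware fetch-and-add gives you that for a shared counter.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        std::atomic<long> counter{0};

        // fetch_add maps to a single hardware read-modify-write on common
        // platforms, so each increment finishes in a bounded number of
        // steps and no thread can starve another. A mutex-based counter
        // is merely blocking; a CAS-retry loop is lock-free but not
        // wait-free.
        std::vector<std::thread> pool;
        for (int t = 0; t < 8; ++t)
            pool.emplace_back([&counter] {
                for (int i = 0; i < 100000; ++i)
                    counter.fetch_add(1, std::memory_order_relaxed);
            });
        for (auto& th : pool) th.join();

        std::printf("counter = %ld\n", counter.load());  // expect 800000
        return 0;
    }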
Throughout the early history of HPC, the field was dominated by physics simulation, with its emphasis on matrix-vector work. Now a much larger variety of code is being parallelized, and people are finding out the hard way that parallel algorithm design is a lot harder than sequential programming.
As I like to tell my students: parallel computing provides much faster ways of making your program crash.
Let me guess: they can easily parallelize adding two arrays together, or do matrix-vector operations optimally. This covers some very important bases, but some parallel code needs to be rethought rather than just recompiled when porting to a very different architecture.
We have code which does not use matrix-vector operations, and it works best (a 40x speed-up on 64 cores) on fairly coarse-grained, shared-memory parallel architectures. We still have not managed to make a distributed-memory version (working on it), and are struggling with an OpenCL version for GPUs (working on it with GPU gurus).
Every time I have heard people claim to have tools that take all the hard work out of parallel programming, they show me examples like "add these 10^9 numbers to another bunch of 10^9 numbers". Such tools can indeed take a lot of the hard work out of parallel computing, but not all of it, by quite a long way.
Not sure about that. Quite a few people (myself included) drop the default browser on Android for something with more functionality. My HTC Desire's default browser had no tabs; I tried Firefox on Android briefly but was not impressed, and run Dolphin now. There may be better browsers out there for Android, but I rather like Dolphin, so I won't change now.