Comment Re:It's just a front end to LLVM (Score 1) 160
C++ encourages writing code that is slower.
C++ is often faster than C: https://programming-language-b...
Though certainly not universally so: https://benchmarksgame-team.pa...
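The usual mechanism behind that claim, sketched below (my own minimal example, not taken from the linked benchmarks): qsort must call its comparator through a function pointer on every comparison, while std::sort takes the comparator as a template parameter, so the compiler can see it and inline it.

    // Minimal sketch: the same sort written both ways.
    #include <algorithm>
    #include <cassert>
    #include <cstdlib>
    #include <vector>

    // C-style comparator, called through a function pointer on every
    // comparison; the optimizer usually cannot inline the indirect call.
    static int cmp_int(const void* a, const void* b) {
        const int x = *static_cast<const int*>(a);
        const int y = *static_cast<const int*>(b);
        return (x > y) - (x < y);
    }

    int main() {
        std::vector<int> a{3, 1, 2};
        std::vector<int> b = a;

        std::qsort(a.data(), a.size(), sizeof(int), cmp_int);

        // The lambda's type is part of std::sort's instantiation, so the
        // comparison is visible to the compiler and inlines away.
        std::sort(b.begin(), b.end(), [](int x, int y) { return x < y; });

        assert(a == b);
    }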
There are two conflicting definitions for AGI.
The older one is any AI that is not designed for one specific task, but can learn new tasks and transfer skills from old tasks to new tasks. We have had that for years, but that kind of AI does not use human language, and is routinely ignored in discussions about AGI. Arguably it is not AGI because it cannot do everything a human can do, but then again, neither can humans.
The Sam Altman definition is "an AI that makes me a billion dollars". The only type of AI that can possibly match that definition is Charles Stross's "slow AI", the profit-oriented corporation. And we have had that for centuries. Any other type of AI is not going to match that definition because, as Peter Thiel put it, "AI has no moat": as soon as OpenAI can do it, anyone can do it.
Since USENIX's inception in 1975, it has been a key gathering place for innovators in the advanced computing systems community. The early days of meetings evolved into the two annual conferences, the USENIX Summer and Winter Conferences, which in 1995 merged into the single Annual Technical Conference that has continued to evolve and serve thousands of our constituents for 30 years.
USENIX recognizes the pivotal role that USENIX ATC has played in the shaping of the Association itself as well as the lives and careers of its many attendees and members. We also realize that change is inevitable, and all good things must come to an end:
The last ATC will include both a celebration of USENIX's 50th anniversary on the evening of Monday, July 7, and a tribute to USENIX ATC on the evening of Tuesday, July 8.
the value of nuclear weapons is obvious
Hardware is more energy efficient than ever. And software could make better use of it. In the trilemma of "good-cheap-fast, pick any two", good is always the one that doesn't get picked. That also means that correct, safe, secure, and resilient are at best afterthoughts.
Software optimisation is not the problem. Modern languages, compiled and interpreted, are doing a great job at optimising.
Monolithic designs are also not the answer; they are part of the problem. Intuitively one might expect that one big silo hiding all the complexity would be easier to optimise, but that is not the case: The complexity doesn't go away, and hiding the combinatorial explosion means you lose insight and maintainability.
Microservices are also not the answer. You want small, specialised tools that each do one thing, and do it well; and you want to build systems from those inherently parallel, scalable tools. You don't want the overhead of long-running processes communicating over the network. You want small, short-lived processes running in parallel, managed by the operating system (or a VM efficiently using the IPC primitives of the OS). You want short, fast scripts using highly optimised specialised commands.
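To make that concrete, here is a minimal POSIX sketch (assuming a Unix-like system, with ls | wc -l as a stand-in for real specialised tools): two short-lived processes wired together by a pipe, the OS's own IPC primitive, exactly what a shell does for you.

    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        int fd[2];
        if (pipe(fd) != 0) return 1;   // one pipe, managed by the kernel

        if (fork() == 0) {             // producer: a short-lived "ls"
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", "-1", static_cast<char*>(nullptr));
            _exit(127);                // only reached if exec fails
        }
        if (fork() == 0) {             // consumer: a short-lived "wc"
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", static_cast<char*>(nullptr));
            _exit(127);
        }
        close(fd[0]); close(fd[1]);    // parent holds no pipe ends
        while (wait(nullptr) > 0) {}   // reap both children
        return 0;
    }

The shell one-liner ls | wc -l does the same thing; the point is that the parallelism and the plumbing come from the OS for free, with no long-running services involved.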
These are lessons that had to be learned over and over again. Artisans have learned them (good craftsmanship), mechanical engineers have learned them (respect the humble screw), electrical engineers have learned them (sockets and breadboards), and programmers have learned them (Unix philosophy).
For all the cruft in software, hardware also has room for improvement: Currently it is more cost effective to kluge together packages (and not in a modular way) than to design an elegant machine. We get processors that implement many ISAs in one and still stall for ALUs, while it requires legislation to make the battery replaceable.
But hardware has mass and volume; software has not. (Well, in a theoretical-physics sense it does, but practically that's immeasurable.) So software expands to fill all available space and saturate every processor cycle, because empty RAM is wasted, and idling CPUs are just space heaters. And what used to be accomplished in a few kilobytes now takes gigabytes, which take longer to load than the old solutions took to compute. What used to be a few lines of text is still text, but spread over several files using different structuring syntaxes, compressed, indexed, and served by a daemon using no structured query language.
Because if things were easy to read and easy to edit and easy to process, it wouldn't feel technical. It would feel like anyone could do it; it would feel like magic. At the same time, no thought is wasted on grokking the magic, because the IDE, the language server, and the LLM tell you how to work around the limitations that prevent you from making mistakes, saving you the trouble of understanding the theory.
Anyone can add complexity, and anyone will. Keeping things simple is the real skill.
And we don't keep things simple by ignoring complexity, one way or another.
But modular designs make innovation easier.
Are you sure something can orbit the Earth at a constant 52 degree latitude relative to Earth at all times?
Are you sure that is what I said?
If it was in polar orbit.
As it is, it orbits at an inclination of 52 degrees, and between those latitudes the ratio of land to ocean is higher.
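For anyone who wants to check: under the usual simplifications (circular orbit, spherical Earth), the sub-satellite latitude is asin(sin i * sin u), with u the angle travelled from the ascending node, so a 52-degree-inclination ground track oscillates between 52 N and 52 S rather than sitting at any one latitude. A throwaway sketch:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = std::acos(-1.0);
        const double deg = pi / 180.0;
        const double incl = 52.0 * deg;   // Kosmos 482's inclination
        double max_lat = 0.0;
        // u sweeps one revolution, measured from the ascending node.
        for (int u = 0; u < 360; ++u) {
            const double lat = std::asin(std::sin(incl) * std::sin(u * deg));
            max_lat = std::max(max_lat, lat);
        }
        // Prints 52.0: the track never gets above the inclination.
        std::printf("max latitude of the ground track: %.1f degrees\n",
                    max_lat / deg);
    }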
It does not; it uses a battery.
Besides, the Soviets preferred solar panels over plutonium for their space probes. RTGs make sense in the outer Solar System, where the waste heat is useful to keep the electronics at operating temperature. Venus is not that cold.
How does a thing that isn't water, 'water in the ocean'?
By not being made of water.
It can't land where there is no land, can it.
Are you going to ask next how a thing that isn't land can land on land?
Why do you think they can't use the radios on the Shuttle during re-entry?
I think that's because the Shuttle was decommissioned in 2011.
The heat shield of the Shuttle could withstand a temperature of up to 1533 Kelvin. That's a lot less than 3200 Kelvin. No wonder it broke apart in 2003.
To be extra clear here, the Shuttle's heat shield was made of silicon dioxide, and the melting point of silicon is 1683 Kelvin.
Death Probe, part 1 (S4E13), and part 2 (S4E14).
Possibly inspired by part of the Kosmos 482 probe crash-landing in New Zealand five years earlier.
But it is not clear why a Venera probe would carry weapons, much less use them.
Best I can find is this:
https://www.thelancet.com/jour...
It is not just grades that aren't impacted.
Right, I meant that the odds are 11 to 9. So that's a probability of 11/(11+9) = 11/20, i.e. a 55% chance.
Nuclear power is the single most expensive form of energy. Nobody needs to oppose it, it is simply not economically viable. No nuclear power plant has ever made its money back.
Solar and wind are the cheapest and cleanest forms of energy, and those are being opposed, actively, by the oil lobby and the nuclear lobby. Yet they keep succeeding, because they are the cheapest and safest.
Why cheap, clean power is being opposed is no mystery.
It's not a moral panic, there is actual evidence that a general cell phone ban does help. A lot.
I've only found evidence to the contrary so far.
53 years. And it was built to land on Venus, so it is almost certainly going to survive re-entry. There is a ~11/9 chance that it will water in the ocean, and it might survive that, too. Well, as much as a dead probe can survive, anyway. The parachute will probably not open, and most of the ocean is deep enough that the probe might not withstand the pressure.
This is exciting. Maybe they will collect it and display it in a museum? If they can find it. It is practically lost technology: When was the last time a probe was sent to land on Venus?
Top Ten Things Overheard At The ANSI C Draft Committee Meetings: (7) Well, it's an excellent idea, but it would make the compilers too hard to write.