
Comment Re: Now do the US Military (Score 1) 226

The specifics of the market also mean there are a lot of very bad incentives for insurance companies to use pre-existing conditions or to try to deny coverage after the fact, and very strange price/discount structures.

There are also perverse incentives for a regulated but private industry to increase costs, as long as everyone does it. When your profits are capped at a fixed percentage of costs, the way to make more money is to increase costs.
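The arithmetic behind that incentive is trivial; here's a toy illustration with entirely made-up numbers (a hypothetical 5% cap, not any real regulation):

```python
# Toy illustration, made-up numbers: when profit is capped at a fixed
# percentage of costs, the only way to grow profit is to grow costs.
PROFIT_CAP_PCT = 5  # hypothetical 5% cap on profit as a share of costs

def allowed_profit(costs):
    return costs * PROFIT_CAP_PCT // 100

print(allowed_profit(100_000_000))  # 5000000: $5M profit on $100M of costs
print(allowed_profit(200_000_000))  # 10000000: double the costs, double the profit
```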

Comment Re: So that's not at all how science works (Score 2) 77

You want to know what happened to science? What is a woman then, rsilvergun?

This just proves his point. It's a right-wing talking point to try to confuse people about definitions. Scientists try to understand how things work and often define terms to help explain those workings. Sure, they often reuse existing terms instead of making up new words, but they also give them new, precise meanings. If the theory is successful enough, that definition might even get added to the many definitions already in the dictionary.

So of course, from a scientific perspective, defining a woman is not an easy question. Like many common words, it's not precisely defined and depends on context. A scientist would need to stipulate a definition if they want to say anything meaningful about the subject.

Comment Re:In other news (Score 1) 181

It would be nice if the actual article were available. Historically, much of this research is sketchy. Sure, they can see whether an accident victim had THC in their system and then declare that causal to the death, but that doesn't mean it's true. At a minimum, one needs to compare against the base rate of accidents for the same demographics. It might even turn out that THC lowers the average fatality rate, since it causes users to drive more slowly.
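The base-rate point can be sketched with entirely hypothetical numbers (none of these figures come from any real study):

```python
# Hypothetical numbers only: finding THC in 30% of fatalities means
# nothing by itself if ~35% of comparable drivers would test positive
# anyway. The ratio of the two rates is what matters.
thc_positive_fatalities = 300
total_fatalities = 1_000
base_rate = 0.35  # assumed THC-positive rate among comparable drivers

observed_rate = thc_positive_fatalities / total_fatalities  # 0.30
relative_risk = observed_rate / base_rate

print(f"relative risk ~ {relative_risk:.2f}")  # below 1.0 with these numbers
```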

Comment Re:Garbage in, garbage out. (Score 1) 98

I think maybe the big mistake that's been made is imagining that intelligence is just one model.

Minsky's Society of Mind was published in 1986. Ironically, he's often blamed for slowing down neural network research with his earlier XOR result. More recently, we have mixture-of-experts LLMs, which can be considered more than one model: a gating mechanism is learned to determine which experts to use for each input.
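A minimal sketch of that gating idea, as an illustrative NumPy toy (random weights, nothing like a production MoE layer, which routes per token inside a transformer block):

```python
import numpy as np

# Toy mixture-of-experts forward pass: a gate scores each expert for the
# input, the top-k experts run, and their outputs are softmax-weighted.
rng = np.random.default_rng(0)

n_experts, dim, top_k = 4, 8, 2
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]  # expert weights
gate_w = rng.normal(size=(dim, n_experts))                         # learned gating weights

def moe_forward(x):
    scores = x @ gate_w                # one gate score per expert
    top = np.argsort(scores)[-top_k:]  # route to the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

y = moe_forward(rng.normal(size=dim))
print(y.shape)  # (8,)
```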

there's some interesting new work on the left and right hemispheres as a whole representing two entirely different modes of attention -- ways of attending to the world.

Is this the Iain McGilchrist work? The original split-brain research is fascinating but often ignored.

Maybe these models are breaking down because they're trying to bring together too many disparate things and they lose structure because there is no one structure which can do them all.

Perhaps. It might also help to have a more sophisticated reward system that better uses feedback to learn continuously. These models probably don't have any notion of a simple fact that can be verified. A human might speculate or bullshit about lots of things, but knows that some things are simple, known facts that can't just be made up. They know they should look those up, and if they do so often enough, they eventually learn the fact.

Comment Re:Mostly useless for normal users (Score 1) 65

It's tempting to get a Mac Studio with 128 GB; it's less than the price of a single 32 GB 5090. I guess the real competitor is the cloud: API token access is pretty cheap, and it's pay-as-you-go. I'd need a serious use case to justify $3,500. It probably only makes sense if I need the Mac for something else anyway.

Comment Re:Ha Ha HA HA HA! (Score 1) 153

If I were to tell you that a shoe had 5 billion atoms in it - would you think that is a big shoe or a small shoe? You do not know because you do not know how many atoms are in any shoe.

Small shoe. Maybe a protozoan could wear that shoe. As a rule of thumb, don't forget Avogadro's constant, which is roughly 6*10^23. That many water molecules is about 18 grams, so a shoe of 5 billion of them would weigh around 1.5*10^-13 grams.
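The back-of-envelope calculation, for anyone who wants to check it (treating the 5 billion atoms loosely as water molecules):

```python
# Back-of-envelope: mass of 5 billion water molecules.
AVOGADRO = 6.022e23      # molecules per mole
MOLAR_MASS_WATER = 18.0  # grams per mole of water

n_molecules = 5e9
mass_grams = n_molecules / AVOGADRO * MOLAR_MASS_WATER
print(f"{mass_grams:.2e} g")  # ~1.49e-13 g
```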

Comment Re:I'm Not Surprised (Score 1) 121

This is exactly what anyone the least bit conversant in machine learning could have told you.

Yes, it's a story that many with the least bit of knowledge have latched onto. That doesn't mean it's correct.

As for the academics and smaller industry players, I'm sure they're not happy about being locked out of the "real" research. Full training experiments take millions of dollars in electricity, which is feasible for only a small number of companies. What's left is mostly playing games with prompts, which, while often effective, could be done by high-school students.

At this point, it's unclear if they've hit a significant wall. Obviously they are doing a lot of research. Unfortunately much of it is not public. Time will tell. Personally, I'm supportive of the more academic focus on getting a better understanding of how these models learn, but I'm not optimistic. Even the theory for much simpler models is not very useful.

Comment Re:Typical, do a bunch of worthless studies (Score 1) 77

Herbal supplements don't work. It's been studied to death, but people like you ignore the studies because you want to believe.

There are lots of studies showing they work. Many are quite old. https://pmc.ncbi.nlm.nih.gov/a...

Studies are ongoing every year, so your claim that no one has looked for evidence in decades is a straight-up lie.

The problem is that there are no studies at the level that would be required for FDA approval. Of course, who would be stupid enough to pay billions for such a study on a drug they can't patent?

Comment Re:Useful != intelligent LLMs only mimic correctne (Score 1) 86

It doesn't know that when writing code in a language, it should compile it to see if it works.

I assume it can't compile the code because you're using an older version. Modern versions have access to tool calls that can do things like compile the code to see errors and check outputs. The model can then generate a new version until it gets it "right" or gives up.
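A rough sketch of what that tool loop looks like, using Python's built-in compile() as a stand-in for a real compiler; generate_code here is a placeholder for the model call, not any real API:

```python
# Sketch of a compile-and-retry tool loop: check the generated source,
# and if it fails, feed the error back into the next generation.
def check_ok(source):
    """Syntax-check a Python source string; return (success, error message)."""
    try:
        compile(source, "<generated>", "exec")
        return True, ""
    except SyntaxError as e:
        return False, str(e)

def generate_with_retries(generate_code, prompt, max_attempts=3):
    feedback = ""
    for _ in range(max_attempts):
        source = generate_code(prompt + feedback)  # the model call (assumed)
        ok, error = check_ok(source)
        if ok:
            return source
        feedback = f"\nFix this error: {error}"    # feed the error back in
    return None

# Demo with a fake "model" that fixes its syntax error on the second try:
attempts = iter(["print('hi'", "print('hi')"])
result = generate_with_retries(lambda p: next(attempts), "write hello")
print(result)  # print('hi')
```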

Personally, I don't think it's bad that the first attempt has errors. Most humans also need to iterate this way. For complex problems, humans can take months or years to figure things out as they try different approaches, research related problems, and solve simpler forms first.

However, this is also one of LLMs' biggest weaknesses. They don't "learn" while solving problems, so current architectures can't make long-term progress. After the context window fills up (and probably even before that), the model loses knowledge of what it's done.
