
Comment Re:Influencers != reviewers (Score 4, Insightful) 66

If it's marked as sponsored content, then it's fine that they add conditions on how the device can be presented. Even if there is no money paid, these are expensive devices, so something of value was transferred.

If it's not marked as sponsored content, the device should be considered a gift and the receiver should be able to do with it as they see fit. I don't think it matters much whether the receiver is a reviewer or influencer.

It seems to me that Google is trying to have their cake and eat it too: use influencers to reach potential customers in a way that feels more spontaneous and trustworthy than advertising, without accepting that the company is then no longer in control of the message, even though that spontaneity and trust come precisely from giving up that control.

Comment Re:Mass Revolt by Senior Researchers? (Score 2) 49

"Move fast and break things" is a Silicon Valley motto. There is no legal issue as long as the investors are aware of what they're getting into. The problem with SBF was that he told customers that their money was safe at FTX, while in fact they were making risky investments with it at Alameda.

Comment Re:good for them (Score 2) 43

With the entire video team (*) following Nick after his departure, I think those higher-ups will soon realize they shot themselves in the foot with this decision. A brand doesn't have lasting value once whatever people liked about the brand is no longer there.

(*) TFA says "a number", but after hearing the names in the Second Wind announcement stream, I think it's pretty much everyone on the video team. In any case, no new videos have been posted on The Escapist's channel for a few days.

Comment Re:Link to the actual paper (Score 1) 78

None of that has any chance of happening in the near future.

Machine learning takes a huge amount of computation. In particular, while larger networks become more capable, each step up in capability requires a disproportionately larger network and training budget. For example, Microsoft already admitted that while GPT-4 performs well, it is too computationally expensive to deploy at large scale. Any AI with superhuman levels of intelligence would require so much compute power that it would be easy to detect and shut down: you could literally pull the plug on it. This might not be true forever, but it will take many advances in both ML training and hardware to change it in any significant way.
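For a rough sense of the scale involved: a common back-of-the-envelope heuristic for dense transformers puts training cost at about 6 × parameters × training tokens floating-point operations. A minimal Python sketch, with purely illustrative numbers (not any real model's specs):

```python
# Rough training-cost heuristic for dense transformers: ~6 * N * D FLOPs,
# where N is the parameter count and D the number of training tokens.
# This is a back-of-the-envelope approximation, not an exact law.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical example: a 1-trillion-parameter model on 10 trillion tokens.
flops = training_flops(1e12, 1e13)
print(f"{flops:.1e} FLOPs")  # 6.0e+25 FLOPs

# At ~1e15 useful FLOPs/s per accelerator, that is on the order of a couple
# of thousand accelerator-years -- not something that runs unnoticed.
```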

The training advancements might be accelerated by the AI itself, but at the same time there will be diminishing returns on each advancement. There will be a limit on how much intelligence you can get from a certain amount of hardware and power. Hardware advances might stretch that limit further, but they are going to be much slower and much more dependent on human cooperation. So it's unlikely that AI will jump to superintelligence suddenly and unnoticed.

Also, I'm not certain we're all that close to AI actually becoming intelligent. While the improvements in image recognition and language processing are very impressive, AI's ability to reason is still very weak. An LLM can produce good prose, but if it's writing about two people, it's very likely to mix them up, because it has no mental model of the world it is writing about.

The extinction of all Earth-born life sounds highly unlikely: if a super-intelligent AI has no sense of self-preservation, it would be easy to get rid of. If it does, it wouldn't eliminate humans while it still depends on human activity to keep the infrastructure that hosts it running. By the time AI and robots no longer depend on humans at all, I'd argue that they have become a new life form.

Comment Re:Link to the actual paper (Score 1) 78

Thanks for digging that up. It's rather pointless to discuss the risks without actually naming those risks.

It seems to me that all of the risks they mention are things that humans are already doing but might be boosted by AI. Is AI really the problem here?

Economic inequality is a problem, but in our current economic system it's going to get worse over time whether there is AI or not. The cynic in me wonders whether slowing down AI is just a way to stretch the status quo a bit longer, slow-boiling us frogs.

The people deliberately spreading misinformation aren't going to be held back by regulations. People who want to use AI to combat misinformation might be though. It reminds me of the crypto export bans, which were a huge hassle but didn't do much to make the world a safer place. In fact, protocols and systems from that era that are still in use put infrastructure at risk, so I'd argue that even on the safety front it was a net loss.

Intended or unintended biases leading to social injustices happen with statistical models as well, unless care is taken to avoid them. There might be more crime in a particular zip code, but that should not be a reason to automatically flag anyone living there as a potential criminal.
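A minimal sketch of that failure mode, with entirely synthetic data and hypothetical feature names: the training labels happen to correlate with the zip code rather than with individual behavior, so the model ends up flagging everyone who lives there:

```python
from sklearn.linear_model import LogisticRegression

# Synthetic data: features are [lives_in_zip_A, prior_incidents].
# The labels correlate with the zip code, not with prior incidents.
X = [[1, 0], [1, 0], [1, 1],   # residents of zip A
     [0, 0], [0, 0], [0, 1]]   # residents elsewhere
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# A zip-A resident with zero prior incidents still gets flagged:
print(model.predict([[1, 0]]))  # [1]
```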

I think there is a solution: regulate the outcomes, not AI itself. Hold organizations responsible for the actions AI takes in their name, just like they are responsible for the actions of their human employees.

Comment Re:I have yet to see evidence of capabiility (Score 1) 78

LLMs are stupid in the sense that they are good at working with language but terrible at understanding the world. Which only makes sense, since they were trained on language. It's frankly amazing that being good at language yields correct results outside of the language domain as often as it does. But if your job is threatened by an algorithm that gets it right 70% of the time, your job wasn't very secure to begin with.

The comparison to Eliza is unfair though: Eliza only works with the information in the conversation, while LLMs can draw upon a huge amount of information lossily compressed inside the model. For example, Eliza cannot translate one language to another or generate code.

Comment Re:It's not even a quarter. (Score 1) 135

They're charging the developer $0.20 per install, but the developer does not get 100% of the sales price. First, the platform takes a cut, typically 30%. Then the publisher takes their cut, which can be around 50%, depending on the publishing agreement. For a $10 game, the developer might have to pay $0.20 per install out of their $3.50 share. And one sale might be installed on more than one device, for example if the customer has a desktop and a laptop, or something like the Steam Deck, or just buys a new PC every few years. Losing around 10% of developer revenue may be significant but still doable at the base game price; it doesn't leave much room to drop the price for sales or bundle deals, though.
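A quick sketch of that math, using the illustrative numbers from above:

```python
price = 10.00            # sales price of the game
platform_cut = 0.30      # typical storefront fee
publisher_cut = 0.50     # publisher's share, per the (hypothetical) agreement
install_fee = 0.20       # per-install charge

developer_net = price * (1 - platform_cut) * (1 - publisher_cut)  # $3.50
installs_per_sale = 2    # e.g. desktop + laptop or Steam Deck

fees = install_fee * installs_per_sale
print(f"developer nets ${developer_net:.2f} per sale, "
      f"pays ${fees:.2f} in install fees ({fees / developer_net:.0%})")
# developer nets $3.50 per sale, pays $0.40 in install fees (11%)
```

Halve the price for a sale and the same $0.40 becomes roughly a quarter of the developer's cut.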

Then there is also the problem that Unity doesn't have actual install numbers: they're using a proprietary model to estimate them and then plan to bill their customers based on those estimations. So they've got a strong incentive to estimate on the high side and the developer will either have to accept that they're probably overpaying or spend a lot of effort to prove Unity's numbers wrong.

Unity already said that they want to compensate ad-supported games, as long as those games buy their ads via Unity.

I doubt that most truly shitty games are even profitable. The problem of shitty games isn't that they exist, but that they drown out better games. The solution is improving the discoverability of good games, not fewer games getting made.

Comment Re:Cash grab (Score 4, Informative) 135

It's not just games late in development: Unity claims that any sales from 2024 onward, even for games that were released earlier, will fall under the new license terms. And they will count installs before 2024 to determine whether the payment threshold has been reached.

I'm not sure that will stand up to legal scrutiny: it is a big change to the original license terms (not just raising the rates, but replacing the entire model) and it contradicts earlier public statements made by the company. IANAL though. I guess they're hoping they can dodge a class action, and that the cost for high-volume developers is kept low enough to make suing unattractive for large developers.

In the short term, the small but successful developers are screwed: they will go over the payment threshold, but do not rake in enough cash per player to be able to afford $0.20 per install. In the long term, Unity is screwed, as the trust is broken and developers that can afford to switch engines will likely do so, while developers that can't afford to switch might also not be able to pay up.

Comment Re:Cash grab (Score 1) 135

They provide the runtime: the engine implementation that ships as part of the game. However, while the runtime does cost money to develop, charging for it in this way does look like a cash grab: their customers use both the runtime and the development tools (not just one or the other), so why would they have to license them separately, other than to double dip?

Comment Re:Ambitious but interesting, imo (Score 1) 71

The users don't have to abandon Python. The libraries I mentioned currently ship wheels (Python packages) with platform-specific binaries in them. The code for those binaries could be written in Mojo, but the library interface would stick to the Python subset of Mojo. Maybe they would supply some optional Mojo-only modules, but I don't expect them to drop Python compatibility any time soon, if ever.
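For concreteness, here's a minimal Python sketch of that pattern; `_fastops` is a hypothetical compiled extension module standing in for the binary shipped in the wheel (today built from C/C++, in this scenario from Mojo):

```python
# Public API stays plain Python; the heavy lifting lives in a compiled
# extension inside the wheel. Callers never see which backend is in use.
try:
    from _fastops import matmul as _matmul  # hypothetical compiled backend
except ImportError:
    def _matmul(a, b):  # pure-Python fallback with the same interface
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)]
                for row in a]

def matmul(a, b):
    """Matrix product of two nested lists; backend-agnostic entry point."""
    return _matmul(a, b)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```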

Comment Ambitious but interesting, imo (Score 4, Insightful) 71

There is an in-depth interview with Chris Lattner about Mojo on the Lex Fridman podcast.

My impression is that the first adopters would not be Python application programmers, but the people maintaining math-heavy libraries such as numpy, pandas and pytorch. Those libraries are currently a mix of Python and C++; with Mojo it would be possible to write all of it in a single language without losing performance, and perhaps even gain some thanks to autotuning and wider accelerator support.

Of course, that all depends on Mojo delivering on its planned features before the funding runs out. The plans sound realistic enough not to dismiss them, but they are far from easy.
