Comment Tip of the iceberg for ghost people (Score 1) 129

I suspect this is only the very tip of an iceberg. Right now, it is not too hard to create AIs which can even hold up a video call. This will only improve, to the point where even my own family and friends would not be able to tell it wasn't me.

It is definitely already past the point where a harried official doing a "human check" would be fooled.

Even here on forums like Slashdot, Reddit, etc., I suspect the majority of the comments will soon come from bots arguing with each other: shilling, influencing, etc.

Using the tools available to me in 2025, I am certain I could create a person who would fool almost anyone doing any check that was online. I could create social media accounts which were consistent and filled with "real" postings; that is, pictures of the ghost in various locations around the world, hanging out, attending classes, and on and on.

Where this is going to get weird is that, as the tools become nailed down, individual people or organizations will be able to create whole populations of these for their various reasons. Had COVID hit a few years from now, I suspect you could even take a ghost through an immigration process in many countries, get it an education, etc., and then the ghost would be a "real person."

I wonder how many ghosts are now working for call centers and the like, where the call center thinks it has hired a real person?

Comment Re:AGPL (Score 1) 24

I call it assh*le GPL. I understand they are trying to keep the big cloud companies from basically stealing their work, but I am fairly certain they could come up with some license which says, "If you make more than 1 billion in revenue, you can't use this without getting a commercial license..."

This is something which kept me away from Ada as a language. Most of the libraries are GPL or worse. Even Ada itself has some weird licensing which is like, "It's like GPL, only more confusing. We promise that our lawyers won't haul you into court because we are so nice, and our lawyers only eat kittens on weekends."

For most things I want to see MIT, public domain, or Apache. Even the BSD license is a bit off-putting, as it suggests the library or whatever is potentially old and being maintained by boomers.

Another one which is an instant rejection is Java. I really hate anything that is even remotely in the orbit of anything JVM. I use JetBrains products, but I check every few weeks to see if a competitor has built a far faster, lighter, and better IDE using Rust or something.

Comment Redis people are arrogant (Score 1) 24

I have gone to more than one Redis presentation. They were arrogant pr*cks. Usually, when I go to presentations given by tech companies, the presenters are jovial and friendly and have great handouts. They will throw a high-value license out to the audience, that sort of thing, and maybe some kind of credit for their service with some heft to it.

The Redis people left me with a real sour taste in my mouth. They just bragged about service uptimes, which made me very nervous, as they clearly had no safety net underneath if something went wrong. Then they tried to sell us crap which wasn't even the topic of either presentation. I've seen Oracle presentations which were warmer and more useful.

After seeing their presentations, I stopped using their services in all my present and future developments. What I discovered is that, between basic caching and Postgres, I end up with better performance, a cleaner codebase, a cleaner architecture, and way faster development. I will never go back.
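
To be concrete about "basic caching," here is a minimal sketch of the sort of thing I mean: an in-process TTL cache in front of Postgres. The users table and the query are hypothetical; conn is assumed to be an open psycopg2 (or any DB-API) connection.

    import time

    _cache = {}  # key -> (expiry timestamp, value)
    TTL_SECONDS = 60

    def get_user_name(conn, user_id):
        # Serve from the in-process cache while the entry is still fresh.
        key = ("user_name", user_id)
        hit = _cache.get(key)
        if hit is not None and hit[0] > time.time():
            return hit[1]  # cache hit: no network round trip at all
        # Miss or stale: hit Postgres and refresh the cache entry.
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
        value = row[0] if row else None
        _cache[key] = (time.time() + TTL_SECONDS, value)
        return value

One less moving part: no separate cache server to deploy, monitor, or fail over.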

So, instead of Assh*le GPL, they could license it as Shove It Up Your *ss GPL and I wouldn't care.

Comment It drastically reduces classroom size. (Score 1) 44

There are all kinds of interesting studies on classroom size. The bulk of the valid ones point to a magic number: 21. This is a cliff number; that is, the difference between 21 and 22 isn't much different than the difference between 21 and 40. But as you get below 21, outcomes go up and up, until you hit one-on-one tutoring. Below around 3 it becomes arguable whether 3 is better than 1, but only arguable.

All that said, I have used ChatGPT to learn things. The beauty is that you can drill down as far as you want. It can mark your work, it can suggest why you went wrong, it can compliment, it can critique. If you are learning about something and you don't understand it, it can explain whatever you find confusing.

That is ChatGPT, which is an extremely general-purpose tool. It is not following a curriculum, it isn't remembering where you struggled, etc. But, without a doubt, there are people building tools specifically for education; tools which remember the student's progress, push along a curriculum, and are well versed both in the subject being taught and in the best way to teach it.

Also, this is pretty much year 1 of ChatGPT; where will AI-driven education be in 10 years? I love these people finding edge cases where it screws up, or supposing kids will figure out ways to fool it. The simple reality is that most teachers are not very good, and are often working in fantastically broken systems. So maybe this sort of tool will be a minor adjunct for the best teachers at the best schools; but for the vast majority of the world, it will put an entire fantastic education in the palm of your hand.

Other attacks will be that kids will ignore it, etc. But what is different now? Plus, various incentives could be put in place to encourage kids and their families to follow through on an education. Also, there are those kids who are self-motivated; they will thrive when handed an opportunity to zoom ahead as fast as their brains will allow. I'm not only talking about the nerdlings, but the kids who want something: "I want to go to space," "I want to start a fashion business," "I want to work on race cars," or whatever. They will be able to tell their AI school-in-a-box to help them with this, and off they go. This isn't only going to be first-world kids, but kids all over the planet. Kids in the worst of the worst of the worst places will be able to realize their maximum educational potential.

Again, on this last point, some people will say, "Oh, but hungry, malnourished kids can't study as hard." Yes, but that is an argument for health programs, not an argument against AI.

In theory, the biggest losers will be mediocre and worse teachers, as I suspect the eventual evolution of many educational systems will be a few great teachers overseeing large numbers of AI assistants. Some school systems will cling to the old ways, but various accepted performance measures will end up proving this to be bad. And again, people will try to dodge reality by saying, "Oh no, standardized tests are bad." Again, not a condemnation of AI.

If you look at all the people who will argue against this, they will make unsupported accusations and focus on weird edge cases. For example, I suspect one big attack will be on socializing. I agree, this is a part of the educational system. But that just means it effectively becomes a curriculum item, as opposed to today, where kids are left to figure it out in the jungle of the school system. I wonder if these people who will fight AI would like to see a course where the strong kids get to bully the weak ones, or a daily election for prom king and queen so the "peaked in high school" kids get their due.

I wonder if AI teachers and administrators will give the football stars a pass, both educationally and ethically. Yeah, I somehow doubt most people want their kids going to the same school system they went to, if a vastly better one is made available.

Comment My "vintage" macbook pro still kicks ass (Score 2) 46

I have to use some Windows software. There are many fields where Windows is not optional; engineering is most certainly one of them. Thus, I need to run Windows in a VM on a Mac if I am going to use a Mac. The ARM versions of Windows are aspirational, not functional; I would argue Wine on Intel Linux is far more "real." While I appreciated the performance and battery life of my Apple silicon machine, the reality is that my "vintage" Intel Mac, of no particular power, is very good: it performs well and has acceptable battery life. I would go so far as to argue that few laptop users need more than what my 2018 MacBook Pro delivers.

The simple reality for me, though, is that I also need nVidia, so my primary machine is a Lenovo with a solid nVidia card onboard. The Mac is more what I now take to the beach. While I don't use a mini, I would suspect any mini of the same "vintage" with 16GB will meet most people's needs just fine.

Comment Dealers still will screw over customers. (Score 1) 58

Around 1999, I was doing some business with GM and dealing with their top marketing guy. He had great hopes that "this internet web thing" would allow GM to finally bypass the dealer network. He knew it would take many, many years, but he so very much hated the dealers. He blamed many of GM's problems on them.

I see in these announcements that you are still effectively going through a dealer. These dealers will figure out a way to fuck over the customer. This is what they do. This is who they are. Fucking over customers is their primary business; selling cars is just a way to bring the customers within their grasp.

Comment This is not about safety (Score 1) 42

The big AI companies are pushing for these rules not because they give a shit about ethics and safety, but because they want to make sure that small companies coming up with highly competitive AI can't afford the gauntlet of regulations.

This level of reporting will no doubt be very expensive, but it will be a fixed cost that the majors can easily afford and a startup can't. I suspect that if your AI is at all innovative, it will also run afoul of these regulations, and once reported it will trigger very expensive investigations which will tie up and distract a startup.

Comment The big companies will do this to AI (Score 1) 261

The absolute last thing the big companies want is a bunch of us jackasses running our own LLMs. They will get the government to declare this dangerous, and without a doubt, the government will pile on regulations which pretty much restrict us entirely to the APIs of the major AI companies, and that will be it.

The only saving grace will be how hard it will be to define an AI. This will open up new avenues where people create things the rest of us will call an AI but the regulators won't.

Comment It sort of is, but for junior programmers. (Score 2) 99

I use Copilot combined with ChatGPT as a kind of pair programmer.

Rarely does it do anything I couldn't do, and often it doesn't even do it as well as I could. But it speeds my work right along, doing the boring for loops, etc.

But where it really kicks some ass is in the super drudge work: things like cooking up unit tests, putting asserts everywhere, making sure every conditional is covered, etc.
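
To show the kind of drudge work I mean, here is a minimal sketch; the clamp function and its tests are hypothetical, but this is exactly the boilerplate these tools excel at cranking out.

    import unittest

    def clamp(x, lo, hi):
        # Pin x into the inclusive range [lo, hi].
        assert lo <= hi, "lower bound must not exceed upper bound"
        return max(lo, min(x, hi))

    class TestClamp(unittest.TestCase):
        # The drudge work: one test per conditional branch.
        def test_below_range(self):
            self.assertEqual(clamp(-5, 0, 10), 0)

        def test_above_range(self):
            self.assertEqual(clamp(15, 0, 10), 10)

        def test_inside_range(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_bad_bounds_trip_the_assert(self):
            with self.assertRaises(AssertionError):
                clamp(1, 10, 0)

    if __name__ == "__main__":
        unittest.main()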

Some of these things are exactly what junior programmers are assigned, and where they learn. Pair programming is another huge learning opportunity for junior programmers. Except I don't want to work with a junior programmer ever again. I can crap out unit tests and integration tests, and with these tools doing the drudge work, I can stay hyper-focused on the hard stuff.

Before this, a junior programmer could be helpful and occasionally had some cool trick or fact I could learn from. But now they are almost pure productivity-sapping distractions.

Another group of programmers are those rote-learning algo fools. I can get these AI tools to give me just about any answer I need, even where it twists graph theory into knots. These people were mostly useless to begin with, but now they are officially worse than useless.

And this is exactly where I see a big cadre of programmers getting shot: junior programmers, who will now largely go mentorless, and those rote-learning algo turds who used to get jobs at FAANGs because some other rote-learning fool convinced everyone that rote learning was good.

I asked ChatGPT what these people should do and it said, "They should go back to their spelling bees.... nerds."

Comment Worked in SCADA and this is the tip of the iceberg (Score 2) 23

I can't overstate how bad the security around most SCADA systems is.

A very common situation is that the SCADA system is fantastically critical to what makes the company go: a factory, a refinery, a pipeline, a utility, etc. As we all know, most IT departments long ago lost the plot of why they exist, but SCADA operations centers have often kicked IT out entirely. They run their own servers, they buy their own desktops for operators, everything. The staff might go back to their cubicles where IT runs those computers, but IT often has little to absolutely nothing to do with SCADA, frequently not even provisioning the networking, as even that can be its own pile of weirdness.

This is a good thing in that there is no chance of the SCADA system going down while they wait in line behind the ticket system. The SCADA people will have their own experts, who often deliberately live near the servers just so they can rush over in 12 minutes to solve any urgent problem.

But it is also a bad thing, because they aren't usually IT people. They are often some guy who programmed PLCs, then got into networking, and then was moved to the SCADA operations center. These guys often have a pile of knowledge covering a vast range of tech. A large distributed asset like a utility or pipeline could easily be 100+ years old, and the equipment can cover pretty much the entire ten decades of change. They may have paper tapes recording data in one place, Modbus in another, MQTT in another, a bunch of proprietary communications protocols in another, acoustic modems, their own 1000km of fiber optics, satellite comms, some LTE, and on and on. In the server room there could be just about every OS of the last 40 years, from VAX to a shiny new Linux. The level of institutional knowledge one of these people typically has is insane.

But what they often have no knowledge of is security. In this environment, the very concept of regular upgrades scares the shit out of them. The software they use is often super-custom one-off or low-customer-base software, and upgrades have a long habit of blowing things up. So leaving a copy of Windows NT 15 years behind is fine. Solaris, 12 years since its last upgrade? Good. Nobody even blinks at a Red Hat install which hasn't seen an update in a few years. Why would you even think of upgrading the software on a PLC which controls something critical (as in, blows up) if you don't have to?

Often they will have a few weird-ass layers of VPNs and other crusty old security which they swear is "bulletproof."

My theory is that the reason these systems don't get hacked more is simply that most hackers don't know Modbus or serial-over-UDP, aren't doing phone phreaking, or any of that. How many hackers know Solaris? VAX?

Most of the industrial systems I have witnessed were the ultimate in security through obscurity; extreme obscurity. So this CODESYS thing is something I laugh at. I don't know what product MS is trying to sell, but I can say without hesitation that the people in these larger industrial software companies aren't using CODESYS correctly anyway, and have probably left a trail of SQL injection holes (and other BS-easy stuff) a mile long.
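
The sort of hole I mean; a minimal sketch with a hypothetical tags table, contrasting the classic string-built query with the parameterized fix (the %s placeholder is DB-API "format" style, as in psycopg2).

    def get_tag_bad(cur, tag_name):
        # String-built SQL: a tag_name of "x' OR '1'='1" rewrites the query.
        cur.execute("SELECT value FROM tags WHERE name = '%s'" % tag_name)
        return cur.fetchone()

    def get_tag_ok(cur, tag_name):
        # Parameterized query: the driver escapes the value, not the programmer.
        cur.execute("SELECT value FROM tags WHERE name = %s", (tag_name,))
        return cur.fetchone()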

Here is the level of stupid I can absolutely predict: if you look at the traffic going from almost any bit of their system to another bit, there is a low chance it is being encrypted, and if it is being encrypted, they are doing it wrong, so the encryption is easy to break. Basic security hygiene, like rejecting replayed messages, is probably not being done, along with most message authentication. So if you were to just repeat a message telling something to open a valve, it would probably just open the valve. And if you found some unencrypted messages, and one of them carried a float for pressure which normally ranged around 100, and you set it to 100 trillion or something, their software would either happily ingest this new value and act accordingly (probably an alarm or a shutdown) or, more probably, something would overflow and the software would crash. Or set values to 0 where they never normally go, and watch where they didn't do any divide-by-zero checking.
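
To make that concrete, a sketch of the failure mode; the wire format here is made up, but the pattern is the classic one: the value is ingested with no range check, no sequence number, no authentication.

    import struct

    def parse_pressure_naive(packet: bytes) -> float:
        # Hypothetical format: 2-byte sensor id, 4-byte big-endian float.
        sensor_id, pressure = struct.unpack(">Hf", packet)
        # No range check, no replay protection: a forged packet claiming
        # 100 trillion gets ingested (or overflows something downstream) as-is.
        return pressure

    def parse_pressure_checked(packet: bytes) -> float:
        sensor_id, pressure = struct.unpack(">Hf", packet)
        # Minimal hygiene: reject physically implausible values.
        if not (0.0 <= pressure <= 1000.0):
            raise ValueError(f"implausible pressure {pressure} from sensor {sensor_id}")
        return pressure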

I know of one system where part of a communication structure says how long the following array will be, and the receiver simply allocates that much space. It is happy to try to allocate all the RAM on planet Earth.
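
A sketch of that bug, with a made-up framing format: the naive reader trusts the length prefix outright, while the fixed one caps it at something sane.

    import struct

    MAX_SANE_LEN = 1 << 20  # 1 MiB cap; anything larger is garbage or an attack

    def read_frame_naive(buf: bytes) -> bytearray:
        # Hypothetical framing: 4-byte big-endian length prefix, then payload.
        (length,) = struct.unpack_from(">I", buf, 0)
        return bytearray(length)  # happily tries to allocate all the RAM on Earth

    def read_frame_checked(buf: bytes) -> bytearray:
        (length,) = struct.unpack_from(">I", buf, 0)
        if length > MAX_SANE_LEN or length > len(buf) - 4:
            raise ValueError(f"claimed length {length} is implausible")
        return bytearray(buf[4:4 + length])

    # A 4-byte header claiming a ~4GB payload: the naive reader tries to
    # allocate it; the checked reader rejects it.
    evil = struct.pack(">I", 0xFFFFFFFF)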

Comment Every day makes me wonder (Score 1) 85

It strikes me this isn't a very hard bit of science to replicate (or fail to replicate).

If I had to guess, most university chemical engineering labs could cook this up in an afternoon.

So, if it is easy, then someone should have replicated it by now.

Or it is really tricky to get right, with insanely high purities, timing, etc. So maybe people are failing but reluctant to be the one to stand up and say, "Didn't work for me," not knowing whether they failed to get it right or it is bogus science.

But every day that goes by makes me wonder.

Fingers crossed it isn't BS, though. If it isn't BS, then I hope people notch down their respect for the ones who are saying this is "impossible."

Comment nVidia is a bag of assholes when it comes to AI (Score 1) 18

nVidia even has some scare wording around their consumer-grade GPUs, that they "pose a fire risk" compared to their datacenter GPUs.

If AMD wants to kick nVidia's ass, they need to do three things:
* Make a GPU roughly as good as (it doesn't have to be better than) a 4090.
* Make a version of TensorFlow that works with it on Linux, Windows, and Mac.
* Give it gobs of memory, as in 24GB or more.

This last point is super important, as the key feature of the high-end nVidia cards is not their performance but their memory size. Often the biggest performance gain from the better nVidia cards comes from the larger memory, not the number of cores. If AMD went to 32GB, 64GB, or more, they could crush nVidia like a bug.
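
Back-of-envelope arithmetic on why memory dominates (the 7-billion-parameter model is hypothetical; the math is just parameter count times bytes per parameter):

    # Weights alone for a hypothetical 7-billion-parameter model.
    params = 7e9

    print(f"fp32 weights: {params * 4 / 2**30:.1f} GiB")  # ~26.1 GiB
    print(f"fp16 weights: {params * 2 / 2**30:.1f} GiB")  # ~13.0 GiB

    # Training needs several times more (gradients, optimizer state,
    # activations), so 24GB cards fill up long before cores are the limit.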

I don't know if there is something inherently difficult about it, but if they could just put DDR4 slots for standard RAM on the cards, that would allow ML people to customize as needed. For, as I said, it is often the GPU RAM which is the bottleneck constraining what I can do. Even with the higher-end cards, the RAM constraint is enough that I end up buying more cards just for that RAM. Going to multiple GPUs in a desktop or server is a right pain in the ass, both to physically install and to configure. I've met more than one ML person who thought they had multiple GPUs going, only to find they had one.
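
That particular mistake takes a few lines to rule out; a quick sketch using TensorFlow's device-listing call (assuming TensorFlow is installed with GPU support):

    import tensorflow as tf

    # List what TensorFlow can actually see; a one-element (or empty) list
    # means your "multi-GPU" box is really running on one card (or the CPU).
    gpus = tf.config.list_physical_devices("GPU")
    print(f"TensorFlow sees {len(gpus)} GPU(s)")
    for gpu in gpus:
        print(" ", gpu.name)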
