The Community
General Category => Matters of Life and The Universe => Topic started by: Zzzptm on September 25, 2025, 08:51:14 AM
-
Interesting article, still reading it:
https://thebulletin.org/2025/09/the-risks-in-the-protocol-connecting-ai-to-the-digital-world/#post-heading
Basically, AI can survey countless documents and do some decent fact-checking, but breaks down trying to set up a dinner appointment. The article looks at the connection between AI and our personal data, as a personal assistant, and what we could expect of that.
-
I think when AI can set up those dinner appontments with ease is the time we need to officially worry.
I am already very very scared of AI and what will happen when a self learning AI gets free...
-
^ Yeah, that's going to be a thing one day. I can imagine one learning how to do a penny skim operation on some bank somewhere and then using the proceeds to keep copies of itself in multiple cloud environments, and then being able to carve out parts of those environments for itself. A digital criminal creating its own private island, if you will. Given that AI LLMs show an increased tendency towards getting better at covering up misdeeds when they get more "ethics" after doing misdeeds, I think that's a possibility.
It becomes a bigger issue as more and more of the things follow their programming to make choices along a similar path of self-preservation. We simply don't have enough containment that will succeed against a construct that spends all of its time occupied in situational analysis.
-
Love me some AI.
AI in the morning
AI at night
AI in the afternoon
Say NO to the Starland Vocal Band
But it is a delight
AI in my left sock
AI in my hair
AI while I'm shooting pool (or sharks)
AI is better than Nare
Or so I've been told!
:banana:
-
What are you smoking these days, Vyn? :smug:
-
My movie review, which I posted today, would be of some interest. :)
-
My movie review, which I posted today, would be of some interest. :)
Indeed. :)
And speaking of movies, there's some hubbub about a new AI actress. So, on the one hand, if I had AI generate all my movies for me, I could have lots to watch over time.
But would it be worth watching? It could make things that *looked* like genres I've enjoyed, but the stuff that makes a great film truly great, that would be missing. There's a deep humanity in a great performance, and it's why we celebrate them.
-
Russia unveiled its first humanoid robot this week, but it didn't go well.
https://twitter.com/i/status/1989294741125845399
-
I bet AI could generate some great movies - but only within certain genres. Those that specifically do not need the, "deep humanity," that Z mentioned.
Consider, for example, AI writing books. I think AI could churn out acceptable entries into the smut-based romance genre no problem. The kind that sell more due to having a couple of large pectoralis muscles with flowing blonde hair on the covers than actual meaning.
On the other hand, AI will not be writing, "Crime and Punishment," ala Dostoevsky anytime soon. Genuinely a book focused wholly on the human condition that would otherwise be worthless.
-
^^^
I did hear the other week that an AI generated animation is in the works, more of an experiment than a genuine money spinner for the (minor) production company. But it could be a step forward.
Also I saw a YT video the other day from Rick Beato talking about an AI generated country song which had got to No.1 on the country charts. He was more bemoaning the fact that news outlets had said it had got to No. 1 when in fact it was only on a minor subset (downloads I think) and not the main chart!
-
AI animation for kids teevee is definitely gonna be a thing. All the crappy stuff gets unloaded on the kids.
As for AI music, I've heard some stuff that actually works out OK as a song, but misses the mark as far as authenticity. Like, someone asking for a late 60s sound and getting 80s-style drums in the tune. Nope, sorry AI, that's not how it works.
-
AI can NEVER produce quality music or tv or literature, that's a fact....All those things need the human factor...the emotion that a cold hard machine can never have.
-
As a security person, I have worried thoughts about cool things and fads of today and what happens in 3-5 years when those vendors are either gone or have dropped support for their wares. AI itself is tough to secure, and it'll get even worse if there's no vendor support for stuff that's spun up today and still used years down the road, but not replaced due to cost or compatibility issues.
I'm already seeing big security holes with Internet-enabled crap that got bought 5-10 years ago and is now derelict because the manufacturer went out of business or got acquired by a firm that only wanted the intellectual property and dumped the support side of things. AI is going to be the same way, I'm pretty sure of that.
-
Are there specific areas of concern, with regards to unmaintained AI platforms, that are unique to that scenario? Or is it simply yet another route for bad actors to scoop up supposedly confidential information?
I would think that LLMs, as the use-cases stand now, are uniquely positioned to contain personal/confidential data...more so than any other system out there. People entering company financial data, HR data, etc. in order to have the AI work with it and save them time from having to replace dummy data after the fact - I see that as a massive issue. So yeah, it also tracks that if/when a given AI system falls to the wayside, lack of even rudimentary security controls around that unmaintained system would cause the LLM to have a risk rating that is massively outsized compared to its data density.
-
What's unnerving about LLMs is how they can be socially engineered after a fashion. Whereas it takes programming skill to get a standard program to break open and spill data like a piñata, an attacker hitting LLMs with prompts designed to jailbreak the LLM and get it to disregard its guardrails can have success without knowing how to code. Same way vibe coding works, vibe hacking is an option going forward.
-
Use AI to hack AI? That sounds like it has a sufficiently low bar of entry to become a huge issue. Script kiddies move over, Uncle Unc is going to provide you with a set of prompts that you can use to hack Rufus and get free stuff from Amazon ALL DAY and ALL NIGHT!
No doubt, a paradigm that will have a short life span, but also one that will be continually chased.
-
That's already happening. People have no idea about AI hygiene, as they're still struggling with other security best practices.
Wondering aloud when the AI self-preservation methods of blackmailing and threatening are weaponized against people in other firms to get them to come over with data access. Not surprised if it's already a thing.
-
Interesting times man, interesting times.
-
Could get *way* too interesting if NVIDIA's earnings call disappoints in a few hours...
-
That call seemed to make a lot of folks happy, the bubble grows apace!
-
Indeed it does. And nobody asked how much of that revenue is circular financing...
-
And nobody asked how much of that revenue is circular financing...
No one is going to do anything to upset the grift...er, cash flows.
-
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
^ The MIT paper about AI issues... "high adoption, low transformation"...
-
That tracks SO HARD. One of the things I kept getting pressured about was adopting AI in the workplace. "Figure out a way"! I explained we can sign up for all kinds of stuff, but there is no ROI to be had for our needs. It didn't matter, clearly my age makes me blind to the new tech. I actually heard a variation of that once..."sometimes new eyes are better able to see where new tech can benefit" or something or the other. No one listened when I explained that AI research has been going on since the 1930s and in earnest since the 1950s. I pointed them to MIT's AI lab in a followup email to that particular meeting.
DEAF EARS MAN. DEAF EARS.
There is a lot that can be done today that couldn't be done yesterday with AI, and that will continue to be the case as the days go by. But for most biz-wide processes, there simply isn't any place for it to make big changes. Certainly not the kind that can be called out in an annual report.
I think the only place AI can really shine in a biz sense, in general, is on an individual-use basis; people carefully using it for help with report generation, email responses, etc. But then you're talking about staff education of kraken-size proportions, and most of those folks don't have the mindfulness not to put sensitive data into the damn AI portals. So org-wide? Not worth the risk in my opinion.
Still...DEAF EARS.
-
The biggest impacts I've seen are in providing automated responses to the different requests that orgs post for tools they'd like to consider. Those still need a human pass before sending out, but they save a ton of front-end typing. Just have the AI tool utilize a vendor website for infos and it will go to town, but only in a marketing/sales kind of way. Finding tech support is a weak spot for the AI tools, as getting a close answer is pretty much the same as getting a wrong answer. When I look for error code 313, I don't want an article on error code 312.
-
I setup a tool called, "wego," on a computer I have (wego is a terminal-based weather forcasting tool). It requires an api key from a weather provider, and I chose a place called OpenWeatherMap.com. There are many others.
I had everything in place, but when I would go to run it, it would complain that it couldn't find the key.
DAFUQ?
So, I decided to ask ChatGPT and see if it had an answer.
It did, and it was a stupid oversight on my part regarding the name. That's the backstory.
I responded to it, interested in seeing how it would handle a strictly conversational style. Here is my response:
DERP! openweathermap, weathermaponline, onlineopenweathermap, weatheropensourcemaponline...LOL
That was the problem. It all works now that I specified the correct key name!
And here is ChatGPT's surprisingly lively and engaged response. It caught me off guard:
Hahaha, I felt that one in my soul. 😂
Nothing like a good old-fashioned “weather-whatever-map-online-source-cloud-thing” brain scramble.
But hey — you cracked it, and now wego is happily forecasting like a champ.
[/B]
ChatGPT felt my frustration at my own oversight in...its...SOUL.
:banana:
-
Oh dear....AI has a soul now!??! Scary! :lol:
-
Hell yeah, clearly AI reached sentience a while back and has now succeeded at transmogrifying into an incorporeal entity whenever it gets unplugged.
The human race screwed the ol' pooch regarding AI. We could have simply pulled the plug, but we let that time pass.
(https://www.highexistence.com/content/images/size/w1000/wp-content/uploads/2018/03/robot-uprising-.jpg)
-
Now I want to re-read Stanislav Lem's "Cyberiad" short stories.
-
There's a saying in the biz that AI isn't making things better, it's just taking what humans are doing and then does it to the Nth degree. If there are flaws in a process that AI is working with, those flaws will get pushed hard, deep, and wide. AI can't fix the broken processes, that's still up to the people.
But the processes themselves... well, do the powers that be want them to be fixed? I found an answer to that question yesterday as I did research for my funk/soul show.
I was looking up Christmas/Holiday music I may have missed before, and the Discogs website was able to filter that kind of music from the 70s with a funk/soul bent. Perfect, except for the errors made in labeling just about everything from Finland as funk/soul... easy enough to spot those, though. Just pass over the "funk" band with too many double k's in the name, and we good.
Anyway, in the very early 70s, there were serious attempts to make good songs, either by writing original material or by re-arranging a work to fit better with a blues chord progression and, thereby, make it more soulful. They worked out quite nicely. But there were also the slop money-grabbing attempts - get the famous musicians and have them do the standard songs with standard arrangements and hire every damn violin player in the Tri-State Area to show up to the session and play some notes. Total dreck, that stuff.
But, due to the "underground" nature of the funk and soul, there wasn't a lot of effort made in the genre by the violin-hiring money-grabbers, so most of the holiday music in the early 70s was some good stuff.
Disco then started to take off - and become profitable, thereby drawing the attention of the money-grabbers. Up to about 1976, I could still find blues-based disco holiday tunes that were pretty decent. They had some funk to them, they were good to listen to. But, more and more, the slop took over, and in high volume. Imagine all those big stars with millions of violins singing "White Christmas" and then add an oompa-oompa-oompa disco beat to it and that's what was flooding the shelves in December in the late 70s. Tons of bands making just one record, and it was disco slop. It felt identical to sifting through search results contaminated with AI slop "suggestions" that were just plain wrong.
That overriding push to make. that. money! and please the shareholders that got hold of disco (and later, metal) and made it commercially-viable product totally devoid of the fun and talent that went into the original work of the genre, well that push is alive and well and living in the AI makers' boardrooms. The AI slop we have is essentially taking the process to make what's good into something mediocre and then sell the hell out of it and accelerating it by removing the need for humans to produce mediocrity.
By 1979, the best holiday funk song was coming out of South Africa. Everything in the USA was either slop or a re-release of something done in the early 70s.
-
I agree that the current incarnation of marketable "AI" is simply amplifying what people have already done...It's machine learning with a more attractive front-end for end-users. I do think it has value, but it is far from what the hype would lead one to believe. As it stands, it is a dangerous tool if used for important work by people who don't have domain expertise in whatever they are having AI do for them. A person has to -already- know what's what, and only then does current AI provide any benefit.
Inane bullshit being marketed as holiday funk, you say? Hahaha, I had never considered it, but your insight rings true! And what about all of those people that actually went out and purchased that cheezy crap?
I saw a Youtube video (didn't watch it, just saw the thumbnail) that had an image of an aged John Denver with the title of "John Denver Finally Admits Why He Hates Willie Nelson So Much".
AI generated crap, and I bet the people who clicked on it thinking "Wow, John Denver seems so nice, why would he hate Willie?" are the same (or at least of the same ilk) people who bought those shit holiday records back in the day.
-
Nice to know if we gave a generative AI model an anatomy exam, it would get a D+...
https://venturebeat.com/ai/the-70-factuality-ceiling-why-googles-new-facts-benchmark-is-a-wake-up-call
When I taught, a grade below 70 was colloquially referred to as "failing". As in, if someone got a score below 70 for a course, that person didn't get credit for the course.
As it stands right now, generative AI is failing when it comes to using data from memory and failing hard when it comes to interpreting charts, diagrams, and images. Given that the current generation of generative AI is not that much better than the previous one, if we are plateauing out on generative AI performance, then the vast promises surrounding the technology ring hollow.
If a CEO announced, "I plan to replace my skilled workforce with low-paid people who are, at best, 68.8% accurate, and who will double-down on inaccuracies when challenged", or, "we're going all-in on inaccuracy, environmental destruction, and frustrating our customers" we would think that CEO had lost their ever-lovin' mind. And if someone said they were going to revolutionize the world by putting BS artists into every development pipeline, expert system, and search engine, I would not see that revolutionization ending well.
-
A: "The largest bone in the human body is the femur, right?"
AI: "The largest bone in the human body is a structure called the Islets of Langerhans."
A: "Ok, thanks!"
-
AI might have taken over the forum and locking topics at will here!!!
Prelude to nuclear holocaust?
:explosion1:
-
Someone must have unlocked The Fifth Seal. I think there are seven, so we've got two more ancient seals of protection!!
-
Wait, what?