wae
wae UltimaDork
3/29/25 11:13 p.m.

In reply to CrustyRedXpress :

For blockchain, yeah, you're spot on.  While that is a pretty interesting and valuable technology, there was way more hype and not enough use cases.  It's an enabling technology but not really transformational.  Because of its use in cryptocurrency, the excitement that surrounded those sort of rubbed off on blockchain and suddenly everyone needed to use it.

AI, however, is truly transformational.  Right now, we are horses of the early 1900s looking at these funny little self-propelled vehicles that are getting themselves stuck and breaking down all the time.  Yeah, that Model-T can do some stuff that the horse can't do, but the horse is still a better way to get around and the car is more of a toy.  Quick show of hands:  How many of us used some sort of hooved animal to get to work in the last day?  Week?  Month?  ...Century?

LLMs are just the start.  Well, maybe not the start.  Maybe the early-middle of the start?  You're not going to make the average worker's job easier by a large margin by just swizzling in a little ChatGPT or Copilot.  These benefits are more akin to not having to buy hay to feed the horse; it's not a make-or-break thing.  But as agentic AI becomes more developed, now we're talking about being able to have the computer do a job.  Not help with.  Do.

There's an email security platform out there called Abnormal Security.  I don't know that I'd class them as full agentic AI, but I think they're sort of in the beginning of the getting ready to begin the beginning stage of it.  At a high level, the product scans emails and tries to squash any sort of phishing or malware attack in real time using AI to scan patterns and understand how email normally flows and then looks for things that could be indicating an attack.  Other email security systems are out there, of course, and a very large enterprise might have 4 or 5 people who manage that.  Most of that work would entail manually checking flagged messages and responding to user requests to remove messages from quarantine.  Abnormal's sales pitch is that their AI is so much more effective than the filters that other products have that you can get rid of 3 or 4 of those people and the remaining employee will need to find other duties to augment the management of the system because it's not a full time job anymore. 

So, if you happen to be the worker who owns the budget for email security, your job just got much easier.  You're going to mitigate a bunch of risk and pay for it with headcount.  And this is the sort of thing that's going to happen more and more as this technology gets more and more mature.

NOHOME
NOHOME MegaDork
3/29/25 11:53 p.m.

Myself? Nope, don't see where it will apply if I can avoid it.

Whenever I think of the incoming AI Tsunami, I end up with this mental image from "American Gods"


z31maniac
z31maniac MegaDork
3/30/25 9:05 a.m.

I'm not reading all the BS.  Even where I work, we used to think that AI was bullE36 M3.  That was 2 years ago.

Now we actually know how to use it. I suspect many saying "it's overhyped" probably don't even know the difference between a naive and an advanced prompt. Or using personas. What RAG is. How it can help with SEO. How many different LLMs there are. etc etc etc

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/30/25 10:09 a.m.

In reply to z31maniac :

Overhyped doesn't mean it's bad. It means it's being touted as having value that it actually doesn't: either more useful than it is, or useful in applications where it isn't. I think that's a pretty accurate description of how AI is being pitched to us.

My biggest problem with it is people who understand it the least pushing for its use in places where it doesn't belong. As pointed out earlier, I hate the generated AI summaries in search results because they are so often incomplete or wrong as to be effectively useless.

It's kind of like having an impact wrench. It's a great tool! Incredibly useful and can make maintenance tasks a lot faster and easier. Then a bean counter with an MBA sees how much time is saved when mechanics use impact wrenches, and tells them that they need to use an impact wrench in all of their jobs.

ShawnG
ShawnG MegaDork
3/30/25 10:38 a.m.

In reply to Keith Tanner :

Thank you.

z31maniac
z31maniac MegaDork
3/30/25 1:08 p.m.

In reply to Beer Baron 🍺 :

It's like I literally said in my post you have to know how to use it. 

It has a lot of value if you know how to use it. I literally don't care if people want to stay in the "get off my lawn" era. Complain about cars not having roll up windows. The rest of us will move on with technology. 

Pete. (l33t FS)
Pete. (l33t FS) MegaDork
3/30/25 1:39 p.m.

In reply to z31maniac :

That's not how it's being sold to the Great Unwashed Masses, though.

There's a reason why search engines have largely stopped using keywords and instead suggest actual questions.

californiamilleghia
californiamilleghia UberDork
3/30/25 2:18 p.m.

I am the Great Unwashed Masses. I understand a little, but many of your replies above are "Greek" to me...

I think the Great Unwashed Masses are scared by much of the clickbait that gets in the news.

We will see what the app is that stops the Great Unwashed Masses from being scared and makes them want to use it because it's fun and maybe useful.

 

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/30/25 2:36 p.m.

In reply to z31maniac :

Right. Something can simultaneously have a lot of value AND be overhyped.

There's a brewing company that started up recently where their shtick is that all of their recipes are designed and written by AI. This is a terrible use. AI doesn't know if a beer is good or not. It can't taste a beer and decide it would be better with a different hop varietal.

z31maniac
z31maniac MegaDork
3/30/25 2:45 p.m.
californiamilleghia said:

I am the  Great Unwashed Masses , I understand a little  , but many of your replies above are "Greek" to me ....

I think the  Great Unwashed Masses are scared of much of the clickbait that gets in the news , 

We will see what the App is that stops the  Great Unwashed Masses being scared and they want to use it because  its fun and maybe useful.

 

There are a lot of free courses to help you learn how to use it. Even in my small group at work, we are seeing people push back against it instead of embracing what's happening and learning new skills. 

You can be mad, not like it, etc. That's how I was 6 months ago. Now I'm learning new things and able to do my job better.

 

I don't know about y'all, I need that paycheck every 2 weeks, the health insurance for my better half. 

wae
wae UltimaDork
3/30/25 9:42 p.m.

I'm working on a project for which I am leveraging ChatGPT right now and found an interesting gap in its training.  Everything I know about PHP, SQL, Javascript, HTML, and CSS I've had to basically just teach myself so I have massive gaps in my knowledge.  I've been using ChatGPT to help me understand how to do things and explain things that I've found on the Internet that lack context.  It has been very helpful, but it just gave me one of those confidently wrong answers.

That's the TL;DR version.

I'm trying to filter some data that is coming in via POST to a PHP script so I asked it how I would add options to a filter called FILTER_VALIDATE_FLOAT.  I wanted to set a minimum value of 0 and a default value of 0 if the input failed to validate for any reason.  It gave me a great example of how to set the options, but it insisted that I could not use a min_range option with that specific filter.  When I told it that the manual on php.net indicated that the option was added to that filter in PHP 7.4.0, it told me that I was absolutely right and I could do that.

But it had this other bit of code that checked to see if the filter was successful.  Now, this would absolutely be important error handling if I wanted to kick back an error when given garbage input, but in this case, I'm happy to just set a default value of 0 and move on.  As far as I'm concerned, I asked you to input data in a specific format and if you berkeleyed that then you deserve whatever value I want to stick in the database.  But it included some lines of code to check if the result of that filter was false and, if it was, to do some error handling.  When I asked it if there was any scenario in which that variable could possibly ever be false, it again told me that I was right and that since I was setting a default value, that code wasn't needed.
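For the curious, the whole exchange boils down to a single filter_var() call.  This is a minimal sketch of the sort of thing I ended up with, assuming PHP 7.4 or later (that's when min_range was added to this filter); the 'amount' field name is just for illustration:

```php
<?php
// 'amount' is a hypothetical POST field name for illustration.
$raw = $_POST['amount'] ?? '';

$amount = filter_var($raw, FILTER_VALIDATE_FLOAT, [
    'options' => [
        'min_range' => 0, // supported on FILTER_VALIDATE_FLOAT since PHP 7.4.0
        'default'   => 0, // used whenever validation fails for any reason
    ],
]);

// Because 'default' is set, $amount can never be false here, so the
// extra false-check it suggested is dead code.
```

Garbage input, a negative number, or a missing field all just become 0, which is exactly the "you deserve whatever value I want to stick in the database" behavior.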

Neither one of those wrong answers would have caused any significant issue - the computational power required to perform the extra evaluations is so minuscule as to almost not exist.  But it does add a bunch of useless code that clogs things up a little bit.

So, while I absolutely stand by my position that this is a technology that is transformative and can and will bring a lot of value to a lot of areas, it is certainly still an assistive technology that needs, at a bare minimum, someone to question its answers.

DarkMonohue
DarkMonohue UltraDork
3/30/25 10:57 p.m.
z31maniac said:

There are a lot of free courses to help you learn how to use it. Even in my small group at work, we are seeing people push back against it, instead of embracing whats happening and learning new skills. 

You can be mad, don't like it, etc. That's how I was 6 months ago. Learning new things and being able to do my job better.

I don't know about y'all, I need that paycheck every 2 weeks, the health insurance for my better half. 

Great Unwashed here.  My experience with most things that have come down from above as a sweeping change that everyone needs to "embrace" is that it's the sweeping change that ends up doing the embracing, prison shower style.

Every so often, some sort of change occurs that we're told is going to result in improved efficiencies and leveraged opportunities and lots of other fun selections from the LinkedIn Buzzword of the Week collection. Sometimes that means fifty employees each doing an extra hour of work a week in order to save an employee twenty hours a week. Sometimes it means committees get formed and enthusiastic participation is expected whether we know what exactly is happening or not.

Point being it's become harder and harder to get on board when some trend-chaser suggests that we "embrace" change, knowing full well that what they really mean is "accept without choice", and that the change we're being force-fed is a) not fully understood by those doing the pushing, and b) likely as not to be quietly unwound when it's found to be more headache than it's worth.

I gotta go. Do you validate parking?

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/31/25 8:19 a.m.
DarkMonohue said:

Point being it's become harder and harder to get on board when some trend-chaser suggests that we "embrace" change, knowing full well that what they really mean is "accept without choice", and that the change we're being force-fed is a) not fully understood by those doing the pushing, and b) likely as not to be quietly unwound when it's found to be more headache than it's worth.

I'd have a lot more faith if it were just being handed to a bunch of nerds who were then told, "If you can use this to do more work in less time, we'll give you a raise and PTO."

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/31/25 8:33 a.m.

I have a big fear with AI/LLMs that I'm not seeing addressed in the general discussion. I'm very afraid of corporate management using it for short-term efficiency gains at the expense of major long-term knowledge destruction.

I think businesses will use it to replace low-level human work, but not high-level knowledge. But in doing so they'll eradicate the next generation of people with high-level knowledge, because no one will get hired to do the menial tasks that build knowledge and skills.

So, my wife knows how to program in several different languages including COBOL. AI is not going to replace her. She has a deep understanding of these systems and how they work that AI can't really be trained on. She built that knowledge and skill base by hacking a lot of really basic code earlier in her career. Now, AI can replace the version of her from 15 or 20 years ago that just hacked simple code. She can use it as a tool to replace a pool of code monkeys. But with those code monkeys gone, there is no new generation building deep understanding of these systems. Eventually she and others in her generation will retire and all of that knowledge will be lost. We'll be left with LLMs that can write a bunch of code really fast, but not people who understand it well enough to patch it into existing systems.

alfadriver
alfadriver MegaDork
3/31/25 8:54 a.m.
z31maniac said:
californiamilleghia said:

I am the  Great Unwashed Masses , I understand a little  , but many of your replies above are "Greek" to me ....

I think the  Great Unwashed Masses are scared of much of the clickbait that gets in the news , 

We will see what the App is that stops the  Great Unwashed Masses being scared and they want to use it because  its fun and maybe useful.

 

There are a lot of free courses to help you learn how to use it. Even in my small group at work, we are seeing people push back against it, instead of embracing whats happening and learning new skills. 

You can be mad, don't like it, etc. That's how I was 6 months ago. Learning new things and being able to do my job better.

 

I don't know about y'all, I need that paycheck every 2 weeks, the health insurance for my better half. 

But other than programming, how does it really help?  Seriously.  

In theory, I can see that it can help process data from tests that I used to do.  But I'm not all that sure that's super helpful- what it could lead to is brain drain thanks to fewer people needed to develop cars.  Honestly, the process to develop cars can't speed up much- it still will take time to make the prototypes, it still takes time to test them, and the data processing time is rather small compared to all of that.  So all it really helps is reducing the number of people you might need- and I don't see that as a good thing, as it means that people have less time to solve problems (since AI can't do that).  Spreading talent out isn't ideal when things are so varied.

If I'm so busy that I need AI to write e-mails for me, is that really a good thing?  If I use AI to generate a resume- that's certainly a bad sign for businesses.  

More efficiency only works when the additional work you can handle gets done to at least the same standard as before.  If it's worse, then it's not better.

This whole thread is about how AI makes your life better.  So far, all I have seen where it really makes a big difference is in programming, and then, IF you have a huge number of people supplying training data, processing huge amounts of odd data can work - like ID'ing cancer or the shapes of galaxies.  But other than that?

I don't see a machine writing my emails as a benefit, especially since we now have AI interpreting messages from others.  I don't see AI making pictures or avatars as anything special.  Especially since I'm not working anymore, I still have not seen how AI can make my life better at all, such that I'd have to use whatever is on this laptop or get a new phone with it.

dculberson
dculberson MegaDork
3/31/25 9:04 a.m.

AI newbie here. I thought I'd share my experience using it for a productive thing. 
 

I had a want for a simple Firefox plugin that would make a specific use case more productive for me. I have never written a Firefox plugin and don't have much spare time so for six months or so the idea languished in the back of my head. One day I remembered my coding friend told me all the great stuff he uses ChatGPT for. I logged into ChatGPT and asked it to help me write a Firefox plugin. Initially it gave me a very thorough tutorial on writing one, then I clarified that I wanted it to write the plugin and what I wanted it to do. It spit out some code and very clear directions on how to test it. The code was even commented! I tried it and got an error message from Firefox and relayed that to ChatGPT. It told me it was due to a version incompatibility and gave me revised code to try. I tried it and it worked 100% for what I wanted. I then tested it for a while and it was perfect so I asked ChatGPT how to package it so it could be installed rather than running in debug mode. It gave me the steps which I followed and it's installed and running on my Firefox instances and is super handy, saving me several minutes of time multiple times per week. 
 

Now, my use case is specific enough that nobody's likely to write this plugin but me. And it wasn't compelling enough for me to go through learning how to write it from scratch. Once it was done I could read the code and understand it so had little in the way of worries about security. And I went from never having written a plugin to a fully functional example doing exactly what I wanted in about 30 minutes. It was pretty amazing and sobering. Anything elaborate would be a different story I'm sure but it was really fun and useful for me in this instance. 

wae
wae UltimaDork
3/31/25 9:51 a.m.

Beyond a doubt, there are some indications right now that AI is having a troubling effect on critical thinking ability.  The fear that if AI does the easy stuff no one will be able to learn to do the hard stuff is not unfounded.  That is precisely why I have been using it in my project to help me learn things and only once did I ask it to actually produce code that I would use.  There have been concerns for almost 20 years about how the ability to get answers to factual questions via Google has been impacting our ability to remember things.  Studies now are starting to suggest that, since AI allows us to ask computers about logic-type questions, it may have a deleterious impact on our own critical thinking ability.  Search allowed us to stop trying to remember things and AI is allowing us to stop trying to figure stuff out.  All those things that our grade school teachers taught us about how using spellcheck and calculators would make us dependent on those technologies and that we wouldn't be able to rely on them in the future are almost completely true.  Granted, we do all walk around with a calculator in our pocket now and almost everything we write goes through a spellcheck process, but -- and this is my own personal crackpot theory for which I have no evidence -- as we've seen with what writing cursive does to the brain, not doing those tasks in our head probably has a negative effect on cognition.  Being able to outsource all of that could come with a really high price and could be the way that we get to full Idiocracy. 

There are other concerns as well, especially as we start to talk about general AI and agentic AI.  There's the Paperclip Maximizer hypothetical about giving the computer an instruction to optimize a factory for making paperclips and the computer finally winds up taking over the country, for example.  I heard another expert talking about AGI in terms of humans getting themselves into a position where they would need to negotiate with an opponent that could think so fast it would be as though they would have 39 years to sit down and think about how to respond to you.  In the here and now we already have problems with the LLMs in terms of hallucinations and being just wrong quite often -- if I remember the article correctly, someone tested the factual accuracy of all the LLMs that are out there now and found that they were all wrong most of the time about a specific set of facts that were used for testing and Grok was the winner of the Village Idiot award with a 94% wrong rate.  There are legal implications that haven't been worked through - who owns the rights to the output and what restrictions or limitations are or can be set on existing works on which the AI could be trained.  Hell, we created it but understand it so poorly that there are research studies and experiments going on to try to understand exactly how the damned things work!  How bizarre is that?  We created something out of whole cloth but don't actually fully understand how it works!

That said, I go back to my Model T analogy.  Yeah, the automobile looked like a fad or a toy when it was new, but even with all that it had enough advantages that we kept developing it and adapting our lives to it until it replaced horses for 99.44% of all transportation needs in modern society.  Computer technology has had a similar arc.  Email and the Internet were mere toys that were good for little more than arranging where to go to lunch in 1993.  Instant messaging was for hardcore geeks in the form of IRC and ICQ.  I can remember Scott McNealy joking that unemployed people did this -- and pretended to hold a phone sideways while tapping on the screen -- and businesspeople did this -- and held up his Blackberry while thumbing on the keyboard.  His implication -- and the sense at the time -- was that the iPhone was just a toy with its "apps" and all that garbage, but serious people needed a keyboard and a platform that just did mobile email.  That was probably in 2010.  How many units did RIM ship last year?  The point is simply that you can scoff at how immature the tech is -- because it is! -- but ignore it at your peril.

You want to know what AI can do right now to make your life easier?  First of all, it probably already has.  Since it's a fairly general technology, you're already using it in a bunch of different ways that you're not even aware of.  But like dculberson's example, it's a matter of coming up with what it is that would make your life easier and then applying it.  You don't want to write a resume or code?  Fine, then AI isn't going to make your life easier by helping you write code or a resume.  So describe something that would make your life better in the way that a technology can.  I'd bet that there's some way to leverage AI to help with that goal.  It might not be the end-all-be-all right now, but it can move the needle.

And, by the way, the needing fewer people to develop cars is a feature, not a bug.  Why would I want to employ a whole bunch of people to sit around and interpret emissions regulations and then try to figure this out when I can pump the whole thing into a roomful of GPUs and have it understand the assignment, look at volumes of data that no human could ever process, and start giving me answers to the questions I didn't even know to ask?  All before I finished my first cup of coffee.

alfadriver
alfadriver MegaDork
3/31/25 10:19 a.m.

In reply to wae :

Having seen the impact of cutting more people at work, it's not a good thing.  Again, fewer people is good if it results in the same or better output.  It's not if the output is worse.  A person has to be able to understand the output and deal with it, and I'm not sure that AI is capable of doing that, given that it takes a lot of learning to do that really well- which means more tests and more human reviews of the tests.  AI is good at taking billions of tests and filtering through them all quickly, sure.  Is it better when you are taking thousands of channels of data, most of which are just there "just in case," at knowing what to change when it doesn't really understand the physics?

Interpreting the results faster does not speed up the process of testing cars- the nominal test takes an hour, and then you have to wait at least 12 hours until the vehicle can be tested again.  The only way to speed the process up is really to test more cars, which is the opposite of doing it cheaper.  Or on a dyno- the time it takes to map an engine is more about how many degrees of freedom the engine has, AI can't speed that process up- the data processing is measured in days compared to months of testing.

When cars were invented, it did something better than existing technology- it replaced horses.  So I could go into town or to the next town or the next state much quicker and easier since I could feed the car easier than the horse.  What is AI really going to replace in my life like that?  That's the question I'm trying to understand.  

codrus (Forum Supporter)
codrus (Forum Supporter) UltimaDork
3/31/25 10:22 a.m.

Commercially speaking, "AI" is today where the Internet was in the late 90s.  The business world believes that there's something tremendously valuable there, but the actual money-making aspects of it haven't really become apparent yet.  As a result you see lots of companies experimenting in lots of different directions, many of which seem trivial and stupid.  A lot of these efforts will fail, but some will prove useful and become as important to life 30 years from now as the Internet is today.

 

 

pinchvalve (Forum Supporter)
pinchvalve (Forum Supporter) MegaDork
3/31/25 10:52 a.m.
alfadriver said:

I kind of call those lazy helpers.  

The other thing it supposedly does is better searches, with the top lines of a search.  But one scroll down shows the same information in the first few links.  So it's really no better.

IMHO, you are a bit off on both points. As a marketer, AI is a big part of my everyday life now. For writing, I am a department of one currently. AI writing assist allows me to generate more ideas, more drafts, more outlines than I could possibly do alone, and it allows me to review and refine way more than I could without it. I have trained my AI to understand the voice and tone of my business, and it now understands scientific concepts and terms that normal spellcheck does not. It's not lazy; it's a tool that increases productivity. 

As for searches, that is evolving rapidly. SEO was a huge deal for us marketers for years, but now it's all about Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO). Search engines are increasingly prioritizing AI answers, so people are trusting the answers provided rather than clicking through links, and marketers are shifting their focus accordingly. Like it or not, AI answers will replace search results someday. The only thing holding it back is that Google has not monetized AI results yet, but you know that will happen just as it did with search. 

wae
wae UltimaDork
3/31/25 10:57 a.m.

In reply to alfadriver :

All of that is true.  Unless you can replace the dyno testing entirely with an AGI that has very specific training and access to massive amounts of GPUs.  We're talking about being able to do to the process what CFD did to wind tunnels.  Yes, we still need wind tunnels (for now), but how much cost has been removed from the process, and how much time have we been able to take out?  One of the features of AGI is that the neural networks will be able to teach themselves at a rate that is more akin to "Tank, I need a pilot program for a B-212 helicopter" than years of school and work experience.  ChatGPT can't do that right now, of course, but LLMs are just the beginning.

When cars hit the market, they weren't better than existing technology on day one.  It took a decade or so before there were more cars being used than horses and a little bit after that to make the horse utterly obsolete.

I hate to say this, but the thing in your life that AI is going to replace is...  you.  The same way that the diesel-electric locomotive replaced the fireman.  The rail company might be contractually required to keep you on the locomotive for a while, but you're pretty much just sitting there taking up space.  And to be clear, I don't mean you, specifically.  I mean people.  The same way that factory automation came for blue-collar folks, AGI is coming for the white-collar jobs.

bmw88rider
bmw88rider PowerDork
3/31/25 11:07 a.m.

In reply to codrus (Forum Supporter) :

We as an org are really starting to unlock the value. It took almost 2 years, and we are a Fintech company. We have been doing that trial and error and found some great use cases in software development assistance. We have been using it as a starter package. It's a 60-70% solution at this point; there is still that 30-40% that is touched by a human. For those that are familiar with Agile, we use it to start our Gherkin scripting and our use case development. 

As a leader in our product development group, it's something I'm driving to incorporate. I'm not getting new staff any time soon and I have a long list of deliverables, so I need to get the most that I can, and it truly is helping.

alfadriver
alfadriver MegaDork
3/31/25 11:46 a.m.

In reply to wae :

Until Navier-Stokes is fully solved, computers won't replace dyno testing.  I've seen plenty of really high end flow models not work, so I know it's going to be some time before dyno testing can be replaced.  Let alone emissions testing, or driving the cars on real roads to know that people will interact with the car in a positive manner.  Heck, structural modelling is really, really good, but cars are still crashed, and then those results are fed back into the model.  Oh, and a good one that has gotten people here- CAD fully did a recent Ford engine, but it was still wrong- the bores were cracking due to high pressure.

And given that there's as much art in engineering as there is science, I honestly don't see AI replacing engineers in my lifetime.  

Data processors?  Maybe.  Robocallers, sure.  Ship captains?  No.  House builders?  No.  Snow plow drivers? probably not.  Room cleaners- no.  Doctors?  No.  It might make them more effective, but it's not going to replace them.  And there will be a limit to how efficient a person can be.

I get the hype to make people think that, but I just can't see it happening.  Extrapolating from being able to look up data online differently, writing e-mails, and doing auto coding at an even higher level than it already does, all the way to designing and developing real hands-on products?  No chance.  

Let alone how AI will make the experience of retirement better.  When computer use is 90% entertainment and 10% helping with travel or learning how to do something- this is why I started this thread, wondering what AI will do for nominal consumers.

Toyman!
Toyman! MegaDork
3/31/25 12:39 p.m.

AI doesn't help me. 

From my standpoint, it's more of a novelty than a usable tool.

When it can clean my house and mow my lawn without me having to hold its hand or bail it out of trouble it can't figure out, it may be worth something. 

 

Snowdoggie (Forum Supporter)
Snowdoggie (Forum Supporter) UltraDork
3/31/25 1:43 p.m.

I'm using it now at work for summarizing documents and other simple tasks.

You can't use it in the legal fields to write interrogatories or complaints. AI doesn't understand strategy. Sometimes AI makes up laws that don't exist because in some cases the law doesn't make sense because laws are written by legislators influenced by lobbyists for very small interest groups to the detriment of the general population. Several guys back East got disbarred for using AI to write their pleadings without reviewing them to see how far the computer got off track.

For the most part, it's just another tool in my toolbox.

I remember when the Dot Com explosion was in full force. Pets.com wanted me to order 40 pound bags of dog food online. I bought my dog food at the PetSmart down the street. I still do. But I also buy a whole lot of crap on Amazon now.
