(comments)

Original link: https://news.ycombinator.com/item?id=40862865

An author experimented with using AI to write compelling, personalized emails designed to capture attention and drive engagement. Despite initial challenges around ensuring authenticity and avoiding a mass-produced feel, recipients did not notice that the messages were AI-generated. The author achieved this by chaining together multiple large language models (LLMs). While some commenters criticize the focus on capturing attention and driving engagement, others argue that technological advances, including AI, contribute significantly to many industries and to our daily lives. Economically, measuring success by the flow of money often overlooks intangible benefits. Efficient technologies such as LLMs can benefit many parties and enable progress beyond what was previously achievable. However, ethical questions arise when considering mass communication, especially marketing, and its potential impact on privacy and the integrity of information.

Related Articles

Original Text


The author links to the somewhat dystopian blog where the email sender is quite proud of their work. Their words (or perhaps that of an LLM):

> Could an AI agent craft compelling emails that would capture people's attention and drive engagement, all while maintaining a level of personalization that feels human? I decided to find out.

> The real hurdle was ensuring the emails seemed genuinely personalized and not spammy. I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately.

> Incredibly, not a single recipient seemed to detect that the emails were AI-generated.

https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.
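For readers curious what "stringing together multiple LLMs" looks like mechanically, here is a minimal sketch of a generic research → draft → critique chain. The model name, prompts, and function names are placeholders of mine (assuming the OpenAI Python SDK with an API key in the environment), not the author's actual pipeline:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm(prompt: str) -> str:
    """One call to a chat model; the model name is a placeholder."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def draft_outreach_email(lead_profile: str) -> str:
    # Step 1: one model summarizes whatever public info exists about the lead.
    research = llm(f"Summarize the interests and context of this person:\n{lead_profile}")
    # Step 2: a second pass drafts an email grounded in that summary.
    draft = llm(f"Write a short, friendly email referencing these details:\n{research}")
    # Step 3: a "critic" pass rewrites anything that reads like a template.
    return llm(f"Rewrite this so it does not read like a mass-produced message:\n{draft}")
```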

How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.



I remember seeing a talk from Jonathan Blow where he made a comparison: in the 1960s the top engineers worked for NASA and put a man on the moon within a decade, basically doing computations by hand. Today we have super-advanced computers, tech companies employ 100× more top engineers than NASA ever had, and they are all working toward making you click on ads more.



Just wait. Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.

A sufficiently advanced personal assistant AI would use multimodal capabilities to classify spam in all of its forms:

- Marketing emails

- YouTube sponsorship clips

- Banner ads

- Google search ads

- Actual human salespeople

- ...

It would identify and remove all instances of this from our daily lives.

Furthermore, we could probably use it to remove most of the worst parts of the internet too:

- Clickbait

- Trolling

- Rage content

I'm actually really looking forward to this. As long as we can get this agent into all of the panes of glass (Google will fight to prevent this), we will win. We just need it to sit between us and everything else.
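For what it's worth, a minimal sketch of the classification piece of such a filtering agent, assuming an OpenAI-style chat API; the labels, model name, and function names are my own placeholders, not an existing product:

```python
from openai import OpenAI

client = OpenAI()  # hypothetical personal agent; assumes an API key in the environment

LABELS = ["marketing", "sponsorship", "banner-ad", "search-ad",
          "sales-pitch", "clickbait", "rage-bait", "ok"]


def classify(content: str) -> str:
    """Ask a model to put one piece of incoming content into one bucket."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": f"Classify the following as one of {LABELS}. "
                       f"Reply with the label only.\n\n{content}",
        }],
    )
    return resp.choices[0].message.content.strip().lower()


def filter_feed(items: list[str]) -> list[str]:
    # Only pass through items the agent thinks aren't trying to sell to us or enrage us.
    return [item for item in items if classify(item) == "ok"]
```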



> Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.

Until _that_ company gets overrun by MBAs who are profit-driven, and then they start injecting ads into the results.

It will come in the vein of "we are personalizing the output and improving responses by linking you with vendors that will solve your problems".



> Until _that_ company gets overrun by MBAs who are profit-driven, and then they start injecting ads into the results.

Found companies with people that share your values. Hire people that share your values. Reject the vampires. Build things for people.



Unfortunately it turns out that at the end of the day one of the most common values is the love of massive piles of money. Vampires don't catch on fire in sunlight like storybook villains, they will invite themselves in, sidle up beside you, and be your best friend. Then in the moment you are weak they will plunge their fangs in.

Competing with bad actors is very, very hard. They will be fat with investor money, they will give their services away, and commonly they are not afraid to do things like DDOS to raise your costs of operations.



Someone has to pay off the $1 Trillion per year in Interest on the U.S. Federal Debt. Who’s that going to be? Either it’s them or it’s you. At least your grandparents got to live a nice life.



This was present in the book Fall; or, Dodge in Hell (published in 2019; it takes place in the near future). Everyone had a personal AI assistant, as you describe, to curate the internet. A big part of the motivation was to filter the spam. A secondary effect was that the internet was divided even further into echo chambers.



I get what you are saying, but what is the end result when someone is so shielded from the outside world that they block everything that irks them and end up stuck in an echo chamber?

What if the user is a conservative voter who considers any counterpoint to their worldview the worst part of the internet and removes all instances of it from their daily life? Not to say that isn't already happening, but at least they are consciously making the choice, not some AI bot. I can see something like this making the country even more polarized.



Same as it ever was.

Growing up as a southern evangelical before the internet, I can promise you that there has never been a modern world without filter bubbles.

The concept of "fake news" is not new, either. There has been general distrust of opposing ideas and institutions for as long as I've been alive.

And there's an entire publishing and media ecosystem for every single ideology you can imagine: 700 Club, Abeka, etc. Again, this all predates the internet. It's not going anywhere.

The danger isn't strictly censorship or filter bubbles. It's not having a choice or control over your own destiny. These decisions need to be first class and conscious.

Also, a surefire way to rile up the "other team" is to say you're going to soften, limit, or block their world view. The brain has so many defenses against this. It's not the way to change minds.

If you want to win people over, you have to do the hard, almost individual work, of respecting them and sharing how you feel. That's a hard, uphill battle because you're attempting to create a new slope in a steep gradient to get them to see your perspective. Angering, making fun, or disrespecting is just flying headfirst into that mountain. It might make you feel good, but it undoes any progress anyone else has made.



Correct me if I'm wrong, but isn't the "government debt" the sum total of currency issued, rather than being like the balance on a credit card? It's better thought of as a measure of the size of the economy being governed. What you want to keep an eye on is the total inflation-adjusted 'value' of the economy, if this starts reducing then that's not good.



Google has 5 times as many employees as NASA, SpaceX, ULA, Rocket Lab and Aerojet Rocketdyne have combined. Which is actually a lot closer than I would have expected. But still, just Alphabet is a lot bigger than the entire US space industry. Adding Fusion probably doesn't change the numbers much.



It’s not the fact they have 5 times the employees that surprises me, it’s how little they accomplish.

SpaceX is launching multiple rockets into orbit every week. Google is... releasing webpage CSS tweaks like the "New Google Sign In Page" and a couple of second-rate AI products no one asked for, when they get caught with their pants down.



Well, the biggest tech companies, with 100× the computing power, are. I'm sure if FAANG collectively focused their funds and hardware on getting to Mars, we'd see the seeds of terraforming in our lifetime.



Someone decided that marketing is now a tech problem. Artists have been replaced by software engineers. The net result is creepy AI emails.

I fell for old-school marketing yesterday. I'm moving into a new apartment in a couple of months. The local ISP who runs fiber in my new building cold-called me, and I agreed over the phone to set up the service. That was proper target marketing. The person who called knew the situation and identified me as a very likely customer with a need for service (the building has a relationship with the ISP). I would never have responded to an email or any whiff of an AI chatbot. They only made the sale because of expensive human effort.



There was no tech here. My new landlord contacted the local ISP, the one they liked to work with, to say they had a new tenant arriving soon. I'd bet that my connection will have been set up long before I arrive, at a time convenient to the landlord and the local provider. A landlord recommending a favored local vendor to a tenant, or a tenant to a vendor, is the sort of human relationship that predates electricity.



cold calling isn't an art, but smooth talking/networking is. There's no exact science to making people feel good and wanting to form a relationship with you (despite centuries of literature claiming that there is).



By that argument, programming and just about every other job is an art as well. If we aren't allowed to automate that away, then we aren't allowed to automate anything.



It'll all vary based on what and who is automated. I'm sure there'd be less (but non-zero) fuss if we were trying to automate plumbing. I'm sure there'd be entire riots over trying to automate professional sports leagues.

I'd say the art industry is somewhere in between because of:

1. Being a traditionally disrespected but non-trivial skill to acquire

2. A skill valuable for advertisement (good art -> pretty ads -> more money)

3. A valuable skill, but not one many industries need full-time work from

4. Due to #1, a "vulnerable" industry: there won't be too many millionaire artists to fight back against the AI overlords compared to, say, politicians or businessmen

But it's not like I have any say on who or what gets affected.



Based on the response upstream, I assume they were talking about the latter. There's no art to door to door sales reading a boilerplate. There is an art to researching a customer and curating a proper response to make them feel good.



And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.

The "smart people are all working in advertising" trope is idiotic. Just an excuse for people to justify their own laziness. There is an infinite number of opportunities out there to make the world better. If you are ignoring them, that's on you.



> And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.

Which is true. But clearly far fewer people work on that than in advertising or some other seemingly meaningless grunt work. And I'm including the technological plumbing work that many on this site, myself included, have depended upon to support themselves and/or a family.

Which, at best, amounts to minor lubrication of a large and hard-to-comprehend system that doesn't seem to have put society as a whole in a particularly great place.



Which do you think is more important: putting a man on the moon, or e-commerce? I reckon you've been able to get on a device, see a biscuit ad, order one from foo.com, and have it shipped to you. Think of how much tech it takes for that to happen; it is more tech than NASA built to send men to the moon: the internet, packet switching, routing, fiber optics, distributed systems, web servers, web browsers, ads, cryptography, online banking, and so on and so forth. We love to trivialize what is common, but clicking on an ad is not an easy problem. Clicking on ads has generated enormous wealth in the world, which is now bootstrapping AGI.

Clicking on ads also helped get us to the AI of today. Showing you the right ad, and beating those trying to game the system, is machine-learning heavy. Where did we first see spelling correction and next-word suggestions? In the Google search bar. Serving the correct ads and dealing with spam took heavy NLP algorithms. If you stop and think about it, we can draw a straight line from the current state of LLMs back to the ad clicks you are talking about.



It took way too long to convince myself this wasn't satire. I still wish it wasn't.

It made me realize that I think many computing people need more of a fundamental education in "hard" physics (statics, mechanics, thermodynamics, materials science) in order to better understand the staggering paradigm shift that occurred in our understanding of the world in the early 20th century. Maybe then they would appreciate how much of the world's resources have now been directed by the major capital players towards sucking the collective attention span of humanity into a small rectangular screen, and the potential impact of doing so.



"Putting man on the moon or ecommerce"

The comparison here is between moonlanding and advertisement. So I choose the moon obviously.

E-commerce can work just the same without LLM-augmented personalized ads, or without any advertisement at all. If a law banned all commercial advertising, people would still need to buy things. But who would miss the ads?



They are clearly talking about one aspect of the industry which is the marketing part related to maximising engagement. It is not meant to be conflated with the e-commerce industry as a whole.



> Is any of that actually inspiring to a young child?

I think the answer is pretty clear in the fact that so many of them, bluntly speaking, just don’t give a shit any more. I absolutely don’t blame them.



Interesting. In my experience, advertisement and the incentives around it have led to the most devastatingly widespread removal of value in human culture and social connections that we've seen in this generation. Huge amounts of effort wasted on harvesting attention, manipulating money away from people, isolating and fostering extremism, building a massive political divide. And centralizing wealth more and more. The amount of human effort wasted on advertisement is staggering and shocking.

I don't think your average adult is inspired by the idea of AI generated advertisements. Probably a small bubble of people including timeshare salesmen. If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them. I don't understand how anybody can consider something like that a net good for the world.

How does non-consensually harassing people into spending money on things they don't need add value for all the world's citizens?



"Adding value" and "Generating wealth" are always the vague euphemisms that these guys fall back to when they try to justify much of today's economic activity. Adding value for who? Generating whose wealth? The answer is usually "people who are already wealthy." Of course, they'll downplay the massive funneling of wealth to these people, and instead point to the X number of people "lifted out of poverty in the 20th century" as if capitalism and commerce was the sole lifting force.

I wish some of these people would think about how they'd explain to their 5 year old in an inspiring way what they do for a living: And not just "I take JSON data from one layer in the API and convert it to protobufs in another layer of the API" but the economic output of their jobs: "Millions of wealthy companies give us money because we can divert 1 billion people's attention from their families and loved ones for about 500 milliseconds, 500 times a day. We take that money and give some of it to other wealthy companies and pocket the rest."



> If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them.

I mean, you'd see the same thing if paying for your groceries were opt-in. Is that also a net bad for the world? Ads do enable the costless (or cost-reduced) provision of services that people would otherwise have to pay for.



> I mean, you'd see the same thing if paying for your groceries were opt-in.

Is that seriously the comparison you want to make here? Most of us think the world would be better if you didn't have to pay for food, yes.



> you'd see the same thing if paying for your groceries were opt-in.

Groceries are opt-in. Until you realize you don't want to hunt and cook your own food, then you opt back in for survival.

uBlock Origin plus some subscriptions show I'd definitely love to opt out of IRL ads too.

>Is that also a net bad for the world?

World, yes. We have the tech to end food scarcity, but poor countries struggle while rich countries throw out enough food each day to feed those poor countries.



Ads are not charity. There is clearly a cost, otherwise they would lose money. They do not generate money out of thin air. "Generate" and "extract" aren't synonyms.

They do not enable any costless anything at all. They obfuscate extraction of money to make it look costless, but actually end up extracting significant amounts of money from people. Ad folks whitewash it to make it sound good, but extracting money in roundabout ways is not creating value.



I think this is a rationalization of an enormous waste of work. The wealth-generating effects are indirect; in that regard you could argue that betting generates wealth too. Advertising is like a hamster wheel people have to jump onto if they want their place in the market.

A similar amount of wealth would be generated if every advertised product would be represented by a text description, but we have a race to the bottom.

There is advertising and advertising of course but most of advertising is incredibly toxic and I would argue that by capturing attention, it is a huge economic drain as well.

Of course an AI would also be quite apt at removing unwanted ads, which I believe will become a reality quite soon.



> A similar amount of wealth would be generated if every advertised product would be represented by a text description, but we have a race to the bottom.

I fear statements like this go too far. I can't agree with the first part of this sentence.

I feel this about both marketing and finance:

They are valuable fields. There are huge amounts of activity in these fields that offer value to everyone. Removing friction from commerce, and the activities that parties take in self-interest to produce a market or financial system, are essential to the verdant world we live in.

And yet, they're arms races that can go seemingly-infinitely far. Beyond any generation of societal value. Beyond needless consumption of intellect and resources. All the way to actual negative impacts that clutter the financial world or the ability to communicate effectively in the market.



> enormous value added to all worlds citizens

This is quite a statement to make.

Please elaborate on what enormous value spam ads and marketing emails have added for _world_ citizens.

Unless of course by “world” you mean Silicon Valley venture capitalists..



In the grand scheme, what you’re talking about is very zero-sum, while stuff like making rockets is not. Uber vs Waymo is a good example of how adtech can only go so far in actually creating wealth.



I keep hearing the phrase "generate wealth" in regards to advertisement and from the mouths of startup founders, but in almost no other context. I'm not familiar with the economic concept of "wealth generation" or its cousin "creating value".

Is the idea that any and all movement of money is virtuous? That all economic activity is good, and therefore anything that leads to more economic activity is also good? Or is it what it sounds like, and it just means "making some specific people very wealthy"? Wouldn't the more accurate wording be that it "concentrates wealth"? I don't see a huge difference between the economic output of advertisement and that of most other scams. A Ponzi scheme also uses psychological tricks to move money from a large number of people to a small number of people. Something getting people to spend money isn't inherently a good thing.



> Is the idea that any and all movement of money is virtuous?

Maybe this was your point, but this is built into one of the definitions of GDP, isn't it? Money supply times velocity of money?
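For reference, the identity being recalled here is the equation of exchange; it is an accounting identity linking the money supply to nominal GDP rather than a definition of GDP:

```latex
% Equation of exchange: money supply times velocity equals nominal GDP
M V = P Y
% M = money supply, V = velocity of money, P = price level, Y = real output,
% so P*Y is nominal GDP.
```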

I’m no economist though I’m sure there are folks on here who are. But this seems like an unfortunate fact that’s built into our system- that as laypeople we tend to assume that ‘economic growth’ means an increase in the material aspects of our life. Which in itself is a debatable goal, but our GDP perspective means even this is questionable.

For example, take a family of five living out in a relatively rural area. In scenario one, both parents work good-paying remote tech jobs, and meals, childcare, maintenance of land and housing, etc. are all outsourced. This scenario contributes a lot according to our economic definitions of GDP, and provides many opportunities for government to tax and companies to earn a share of these money flows.

Then take scenario 2, you take the same family but they’re living off of the grid as much as possible, raising or growing nearly all their own food, parents are providing whatever education there is, etc. In this scenario, the measurable economic activity is close to zero- even if the material situation could be quite similar. Not to mention quality of life might be rated far higher by many.

What rating an economy by the flow of its money does do, and I'm not sure if this is at all intentional, is paint a picture of which money flows are potentially capturable, either by government taxation or by companies trying to grab some percentage as revenue. It's a lot harder to get a share of money that isn't there and/or isn't moving around.

Perhaps my take on economics is off base but, for me, seeing this made me realize just how far off our system is from what it could and should be.



GDP is a measure. I'm very much not an economist, but I am extremely skeptical that the health of an economy can be reduced to any single number. Goodhart's law and all.

I concede that GDP is a good indicator, but I think you can have things that help GDP while simultaneously hurting the economy. Otherwise any scam or con would be considered beneficial, and it would make sense to mandate minimum individual spending to ensure economic activity. A low GDP inherently shows poor economic health, but a high GDP does not guarantee good health.

In my mind (noting, again, that I'm no economist), economic health is defined by the effectiveness of allocating resources to things that are beneficial to the members of that economy. Any amount of GDP can be "waste", resources flowing to places where they do not benefit the public. As Robert Kennedy famously pointed out, GDP includes money spent on addictive and harmful drugs, polluting industries, and many other ventures that are actively harmful.[0]

[0]: https://youtube.com/watch?v=3FAmr1la6w0



Going back to the previous poster's monetary-velocity statement: if you have a trillion-dollar GDP, but it's just two AIs bouncing money back and forth at high speed while all the humans starve in the street, your economy is "great" and totally awful at the same time. The one number has to be referenced against others, like wealth inequality.



"Generate wealth" means "make somebody's number go up" i.e. allocating real resources/capital somewhere, with the assumption that 1. allocating that capital creates a net boon for society and 2. those who have "generated wealth" are wise and competent investors/leaders and their investments will create a net boon elsewhere. The first point is probably not especially true very often in contemporary tech (other than 'job creation') and is arguably not true for advertisement. The second point is not really a given at all and seems to be pretty consistently shown otherwise.



The guy writes a post about how to send spam effectively, and then offers the subscription link in the end with "Promise we won't spam you". Yes, I totally trust you...



I keep seeing these posts on HN and thinking, man, these are some smart people. Training LLMs, doing all this amazing AI stuff like this guy with the email agents and the other guy with the dropping of hats, and then I open the posts and it's just some guy making API requests to OpenAI or some similar disappointment.



When “altcoins” took off I spent a while racking my brain trying to figure out what special tech I could offer, how I could build my own blockchain, incentivize miners…

When I realized it was just dudes copy-pasting a “smart contract” and then doing super shady marketing, it was already illegal in my jurisdiction.



Nowadays, an "AI Expert" is someone who knows how to download an AI client lib and prompt the AI to perform tasks. These are people who are not even technical and have no idea how any of this works, but they can at least follow a YouTube tutorial to get a basic website working.



As someone who actually has a university degree in Artificial Intelligence, I feel like this is always how it's been. Before, an "AI Expert" was someone who knew how to use Tensorflow, PyTorch or Keras. Before that, an "AI Expert" was someone who knew how to write a Monte Carlo simulation, etc etc.

You could of course say the same for frontend engineer or backend engineers. How many frontend engineers are simply importing Tailwind, React, etc? How many backend engineers are simply importing apache packages?

Where do you draw the line? Can you only be an AI expert if you stick to non-LLM solutions? Or are AI experts the people who have access to hundreds of millions of USD to train their own LLMs? Who are the real AI experts?



I would liken it to cars. There is a difference between engineers, mechanics, and mechanics that know a certain car so well that they fabricate parts that improve upon the original design.



Good comparison. Engineers who build cars and understand their intricacies often just work on one small thing at a time, even in teams, like a team working only on brakes. The mechanics can piece the stuff together and keep it working in a real-world setting. But nowadays a self-declared "AI Expert" in that metaphor might be just some person who knows how to drive a car.



If you think back to when cars were introduced, knowledge of how to drive a car was actually a rare skill! People weren't born with that inherent knowledge, so someone who could operate a vehicle (and do some basic maintenance) was an expert.

Nowadays, that would be laughed at. But AI is more comparable to cars from 1900 than modern vehicles.



And they were a leadgen/SEO expert a few years ago. These technogrifters just move from one hot topic to the next trying to make whatever buck they can smooth talk people into giving them.



Well, no one's going to be talking about the secrets behind LLMs while the market is paying billions to own their slice of the pie.

And in reality, most software work is 1) API calls and 2) applied math. If you're not in cutting-edge private tech or academia, your work probably falls into one or both categories. Modern "software engineering" is more a matter of what scale of APIs you're wrangling than how deep your domain knowledge goes.



I can't see this working long term though. Being endlessly positive and ignoring your actual competence sounds like a recipe to eventually bite off more than you can chew.



Oftentimes this fervor is channeled into personal brand building, which rarely has any sort of feedback mechanism tied to actual competence.

It's a calculated move on their part.



Brand building actually sounds good and productive to me, as long as it doesn’t approach fraud.

If your audience likes your brand and doesn’t distinguish between your services and services done by more competent providers, then you’ve found your niche. So: snake oil is not fine; but Supreme branded brick sounds ok to me, even if I wouldn’t buy it myself.

I guess the author will find followers who enjoy that approach to software and product growth. If spamming wasn’t part of it, I’d be ok.



Expect to see someone else write a blog post on How I Used AI to fool an AI Spammer

...of course they'd probably get an LLM to write the article too.



This process should not require a human in the loop.

Consider:

* spammers have access to large amounts of compute via their botnets

* the effectiveness of any particular spam message can easily be measured - it is simply the quantity of funds arriving at the cryptocurrency wallet tied to that message within some time window

So, just complete the cycle: LLM to generate prompts, another to generate messages, send out a large batch, wait some hours, kick off a round of training based on the feedback signals received; rinse, lather, repeat, entirely unattended.

This is how we /really/ get AI foom: not in a FAANG lab but in a spammer's basement.
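In pseudocode, the loop being described is nothing exotic; every callable below is a hypothetical stand-in rather than a real API:

```python
import time


def unattended_feedback_loop(generate_batch, send, revenue_for, fine_tune,
                             batch_size=10_000, wait_hours=6):
    """Sketch of the cycle described above; all arguments are placeholders."""
    while True:
        messages = generate_batch(batch_size)           # LLM writes the batch
        send(messages)                                  # batch goes out
        time.sleep(wait_hours * 3600)                   # wait for responses
        rewards = [revenue_for(m) for m in messages]    # feedback signal per message
        fine_tune(messages, rewards)                    # reinforce whatever "worked"
```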



That’s one of Peter Watt’s Rifters trilogy, I think maybe the second one? Been a few years since I read them. I think it’s a biological neural net, not an Ai per se. Lots of big ideas in those books, but not a lot of optimism and some rough stuff.



Very well could be. Seconded. After all, it could very well become one of the largest vehicles for "mass training" ever...

PS. However, see comments downthread about "survivorship bias". Not everybody will reply, so biases will exist.



This is sort of why I feel somewhat pessimistic about AI: the inevitable most profitable use cases are so bad in aggregate for a society with almost no bounds or values other than profit. It will never be easier to waste people's attention.



"No other values"? When and how is such Doomer Hyperbole getting into HN articles?

This is half of major reddit subs now and I fear the same low quality comments will take over HN.

People need to go out and touch some grass.



While that might work great on the individual level for a little while, it's unfortunately not how normalized taboos seem to work long-term. You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.



This has been mentioned before, but I can see the benefit in having curated webrings and similar listings, where people can verify the content is not LLM-generated.



As soon as that becomes effective, you'll have dozens of SEO sites and experts giving seminars on "How to get your LLM-generated website into curated webrings." An entire cottage industry will spring up for the purposes of corrupting legitimate webrings and/or creating fake astroturf webrings that claim to be curated.



> You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.

I can see it, perhaps positively, investing far less importance and effort into online things. With admittedly a lot of optimism, I could see it leading to a resurgent arts and crafts movement, or a renewed importance put on hand-made things. People say "touch grass"; maybe AI will make people "touch crafts" (bad joke, I know).



I think it depends on the context. I think there's artistic cases for it, for example I've played around with using AI tools to extract speech from its background music for use in further (non AI-based) music which I don't think is an unethical thing to do.



It's already like this for creative communities in things like illustration and writing. You will (rightly) get ostracized and blocked by your peers for using AI. It's a signal for poor quality for most people in those spaces.

Definitely interesting to see the different culture in tech and programming since programmers are so used to sharing code with things like open source. I think programmers should be more skeptical about this bullshit, but one could make the argument that having a more flexible view of intellectual property is more computer native since computers are really just copying machines. Imo, we need to have a conversation about skills development because while art and writing accept that doing the work is how you get better, duplicating knowledge by hand in programming can be seen as a waste of time. We should really push back on that attitude though or we'll end up with a glut of people in the industry who don't understand what's under all the abstractions.



News at 11, spammers use sophisticated techniques to increase the profitability of spam. This is absolutely shocking and never before seen, what is the world coming to.

In all seriousness, manipulation and bullshit generation have emerged as the single major real-world use of AI. It's not good enough yet to solve the big problems of the world: medical diagnostics, auto accidents, hunger. Maybe it's a somewhat better search tool, maybe a better conversational e-learning tool, barely a better IntelliSense.

But, by God, is it fantastic at creating Reddit and X bots that amplify the current line of Chinese and Russian propaganda, upvote and argue among themselves on absurd topics to shittify any real discussion and so on.



> X bots that amplify the current line of Chinese and Russian propaganda...

Do you think those countries are the only ones doing this? Just the other day there was a scandal about one of the biggest Swedish parties, one that's in the government coalition, doing exactly this. And that's just one that got caught. In countries like India and Brazil online disinformation has become an enormous problem, and I think that in the USA and Europe, as the old Soviet joke went: "Their propaganda is so good their people even believe they don't have any".



Sure, but I'm talking about the good:bad ratio of some creations. I really have strong hope for AI, and that we won't regard it in retrospect like the multi-stage thermonuclear device, the landmine or tetraethyl lead additives.



>How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

Trillions, easily. People wanna sell you stuff, and they will pay to get your eyeballs. It doesn't matter if it's to sell you a candy bar or to enlist you into the military. Even non-profits/charities need awareness. They all need attention and engagement.



Also from that blog post:

> As founder, I'm always exploring innovative ways to scale my business operations.

While this is similar to what other founders are doing, the automation, the scale, and the email focus put it closer to spam in my book.



Funny how they're so self-assured that no one sniffed out their AI bullshit. This is survivorship bias: he's looking only at the planes that came back. The people who did notice just didn't reply, and he can't prompt them.



It is not only that too much is wasted on superficial nothing instead of making something with substance that benefits society; it is also that this sucks away the minds that could be engaged in really useful things.



I do believe that commodified attention is the most logical currency of a post-scarcity society, so best case... quite a lot.

Note my 'best case' scenario for the near future is pretty upsetting.



> The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.

In defence of that guy, he's only doing it because he knows it's what pays the bills.

If we want things to change, we need to fix the system so that genuine social advancement is what's rewarded, not spam and scams.

Not an easy task, unfortunately.



> How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

Facebook + Instagram is a $100B+ business; so are YouTube and ads.

An average human now spends about ~3h per day on their screens, most of it on social media.

We are dopamine-driven beings. Capturing attention and driving up engagement is one of the biggest parts of our economy.



The future - megawatts of electricity being used, 24/7 as armies of LLMs email and debate each other, and try to sell each other programs at a great discount.

As for the humans, we went fishing instead.



The irony

Everyone is paying lip service to global warming, energy efficiency, and reducing emissions.

At the same time, data centers are being filled with power-hungry graphics cards and hardware to predict whether showing a customer an ad will get a click, and to generate spam that "engages" users, a.k.a. gets clicks.

It's like living in an episode of Black Mirror.



I would tend to agree with them even without actual data. Just probabilistically there is likely some overlap.

Whether there's enough overlap to call it irony is probably a different question.



I see the bright side: the tech for large-scale computing gets mass-produced, so all the legitimate use cases, like scientific simulations or LLMs for productive work, also profit. And if one really bright day humanity evolves beyond the current state of ad-driven everything, we can put all of it to use for real.

Till then, I will probably avoid more and more communicating with strangers on the internet. It will get even more exhausting, when 99% of them are fake.



I disagree.

Datacenters save a lot more energy than they make. Just how much CO2 is saved when I can do my banking online instead of having to drive to a bank is significant.

The same goes for a ton of other daily things I do.

Is video producing CO2? Yes. But you know what creates a lot more CO2? Driving around for entertainment.

And the companies running those GPUs actually have an incentive to be CO2-neutral, while bitcoin miners don't: they (1) already said they are going CO2-neutral, due to (2) marketing, and they will achieve it because (3) they have the money to do so.

When someone like Bill Gates or Suckerberg says "let's build a nuclear power plant for AGI", then they will actually just do that.



>Is video producing CO2? Yes. But you know what creates a lot more CO2? Driving around for entertainment

What's more likely: watching a movie online, or driving to watch a movie in a cinema?

You know what creates a lot less CO2? Staying at home reading a book or playing a board game.

>Datacenters save a lot more energy than they make

I think you mean CO2. And I doubt that they actually save anything, because datacenters are convenient, so we use them more than we would use less convenient alternatives.

Like the movie example: we watch more, even bad movies, if it's just a click on Netflix, than we do if we have to drive somewhere to watch.

MS recently announced they will miss their CO2 target and instead produce 40% more, because of cloud services like AI.



Have you checked how much CO2 a normal car drive creates vs. watching a movie online?

We need to be realistic here. We know what modern entertainment looks like, and it's not realistic at all to just "read books" and play board games.



But it is realistic that we watch more movies online than in cinemas. And don't forget the datacenters serving those movies need to run even if no one watches. My car doesn't produce CO2 when I don't drive.



It is 100% realistic to read books and play board games. Both markets are massive, and board games in particular are having what I would consider a renaissance. Maybe it depends on your crowd, but everybody I know plays tabletop games and reads books.



You're missing the point. What's not realistic is to tell everyone that they should abstain from any type of entertainment that requires power (TV shows, movies, video games, etc) and should only read books and play board games instead. I don't care what kind of renaissance board games are undergoing, most people still only play the mass market classics, and then only rarely.

I don't know how much energy Netflix uses serving a movie, but playing a video game on my PC for two hours where I'm located might generate a kg of CO2. That's about as much as I'll breathe in a day. Relative to other sources of atmospheric CO2 I'm not that concerned.
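Rough numbers behind that estimate (my own assumptions: a roughly 400 W gaming PC and a grid intensity of 0.5-1 kg CO2 per kWh, which covers most fossil-heavy grids):

```latex
0.4\ \mathrm{kW} \times 2\ \mathrm{h} = 0.8\ \mathrm{kWh}
\qquad
0.8\ \mathrm{kWh} \times (0.5\text{--}1)\ \mathrm{kg\ CO_2/kWh} \approx 0.4\text{--}0.8\ \mathrm{kg\ CO_2}
```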



You are missing the point too.

Driving to the cinema to watch a movie produces more CO2 than watching one movie online, but online is more convenient, so you watch more. That adds up to more CO2 emissions.

The point is that higher efficiency is worthless in terms of CO2 emissions if it leads to higher usage that cancels out the savings.

If a programmer can program faster with AI, that's good if he only needs 1 hour instead of 8; but if he still programs 8 hours a day, the AI's energy consumption just comes on top of his previous consumption.

Climate change doesn't care how efficiently you produce CO2; more is simply more.



My issue was with "we know what modern entertainment looks like" as if humans are now incapable of enjoying themselves without a screen. And you should care about a massive market increase when it's directly relevant to the point at hand. If the initial point was "we know what modern entertainment looks like, nobody plays board games or reads books", pointing out that the board game market has more than doubled in the past decade is far from irrelevant. It actually directly counters the point.

I agree with your second paragraph, and selling the "make better choices to save the world" argument is an industry playbook favorite. Environmental damage needs to be put on the shoulders of those who cause it, which is overwhelmingly industrial actors. AI is not useful enough to continue the slide into burning more fossil fuels than ever. If it spurs more green energy, good. If it's the old "well this is the way things are now", that's really not good enough.



AI and ML will help a lot of people and already do. AlphaFold / protein folding will help us with cancer.

We will have better batteries thanks to ML materials research.

We will be able to calculate and optimize everything related to flow, like wind.

The last thing we need to optimize is compute, and compute is where the most money is anyway. One of the first industries going green is datacenters. Google, for example, is going green 24/7 (so not just buying solar power, but pulling green energy from the grid 24/7 through geothermal and other sources).

Big AI/ML datacenters are crucial for all the illnesses we have which no one cares enough to solve. For example, I have one of these, and we need data to develop a therapy for it, and I'm not alone.



For most of your points it's "may", not "will".

How many battery breakthroughs did we have before AI? They rarely led to new batteries.

>AI/ML big datacenters are crucial for all the illneses we have which no one cares enough to solve.

Too bad that companies like OpenAI and MS buy most of the hardware for their data centers to write summaries of articles and emails and to create pictures.

And even if they find a cure, doesn't mean it will be available for people in need, not without a hefty fee.

Just look at the profit margin of insulin.



The normal miners never said that. They just say this at conferences for simple greenwashing.

The normal miner doesn't go to those bitcoin conferences, they buy asics, put them in some warehouses around the world and make money.



> how much CO2 is saved when I can do my banking online instead of having to drive to a bank is significant.

And if the online bank wasn't sending a bunch of requests to a bunch of third party ad networks on every click, it would save even more.



This is a very limited perspective. There are many parts of the world not beholden to automobiles for transportation. Where I live, I can walk to the bank, and walk or ride a bike to entertainment. The alternative to data centers does not have to be driving an automobile somewhere.



I think it's more nuanced than that. I used to walk to my bank, I can't do that any more because many branches closed. The bank now directs all interactions to happen via their app. In terms of emissions (and social interaction, particularly for vulnerable and isolated members of society) I think this is bad news.

But this is a complex calculus and - frankly - feels like a distraction from the issue. I don't want to get into the weeds of calculating micro-emissions of daily activities, I want climate responsibility and reduction in energy consumption across the board.



People cry about Bitcoin's energy usage now, imagine the amount of energy burned to create next-level spam with "AI".

Flame me all you want, but this is one case where Bitcoin is much more useful than LLM. If it doesn't create value, as its naysayers claim, at least it allows exchanging value. LLMs on the other hand, burn electricity to actively destroy the Internet's value, for the profit of inept and greedy drones.



Yes, that's quite right.

That's why I created EtherGPT, an LLM Chat agent that runs decentralized in the Ether blockchain, on smart contracts only, to make sure that value is created and rewards directly the people and not big companies.

By providing it just a fraction, just a bit north of 10%, of the current fusion reactions occurring in our sun, and giving it a decade or two of processing time and sync, you can ask it simple questions like "what do dogs do when you're not around" and it will come up with helpful answers like "they go to work in an office" or funny ones like "you should park your car in direct sunlight so that your dog can recharge its phone using solar panels".



Something must be very wrong with someone who continuously laughs at computer jokes so I don't think it will ever reach the level you are expecting (hopefully).



Bitcoin has one application, whereas there are multiple applications of LLMs. There might be mountains of noxious AI spam, but it's hard to claim that Bitcoin as a technology is more useful.



It is not about the quantity of the applications, but about the value they bring to society. If it is about spamming and advertising we are even talking about negative value, actually.



So far, I haven't seen a useful application of LLMs. So far.

I've seen things that are wildly hobbled, and wildly inaccurate. I've seen endless companies running around, trying to improve on things. I've seen people looking in wonder at LLMs making mistakes 2 year olds don't.

Most LLM usage seems to be in two categories. Replace people's jobs with wildly inaccurate and massively broken output, or trick people into doing things.

I'd have to say Bitcoin is far more useful than LLMs. You have to add the pluses, and subtract the minuses, and in that view, LLMs are -1 billion, and bitcoin is maybe a 1 or 2.



AI is not just LLMs. AlphaFold, for example, moved a critical goalpost for every one of us.

Bitcoin is only negative. It consumes terawatts of energy for nothing.



I'm as skeptical about LLMs as anyone, especially when people use them for actual precision tasks (like coding), but what they actually IMHO are good at are language tasks. That is, summarising content, text generation for sufficiently formulaic tasks, even translation to an extent, and similar things.



There is one clear (albeit somewhat boring) application of LLM: data extraction from structured documents.

That field has made a leap forward with LLMs.

Positive impact on society includes automated extraction in healthcare pipelines.
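As a concrete illustration, here is a minimal sketch of that kind of extraction using an OpenAI-style API in JSON mode; the field names and model are illustrative, and a real pipeline would validate the output against a schema rather than trusting it:

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment


def extract_invoice_fields(document_text: str) -> dict:
    """Pull a few fields out of a semi-structured document into JSON."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        response_format={"type": "json_object"},  # constrain output to valid JSON
        messages=[{
            "role": "user",
            "content": "Return JSON with keys invoice_number, total, currency, "
                       "and due_date for this document:\n" + document_text,
        }],
    )
    return json.loads(resp.choices[0].message.content)
```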



Healthcare pipelines! All well and good until hallucinations cause death or what not!

And why is this better than employing a human, or reducing complexity? It's not as if human wages are what makes US healthcare hyper-expensive.

This seems like a negative.



No, I really meant structured. Extracting data from structured documents is surprisingly hard when you need very high accuracy.

What I mean by structured is: invoices, documents containing tables, etc.

Extracting useful data from fully unstructured content is very hard IMO and potentially above the capacity of LLMs (depending on your definition of "useful" and "unstructured")



But this is why I made my complexity statement in my other reply.

Why are firms sending around invoices and tables instead of parseable data? Oh, I know the argument: because it's "so hard to cooperate" on standards, etc.

Madness.



Partly because the standards, such as X12, have a high startup cost to use them, they aren't very opinionated about the actual content, and you have to get the counterparty on board to use them.



> So far, I haven't seen a useful application of LLMs. So far.

What?! Whole industries have already been changed by products based on them. I don't think there's a single developer who is not using AI to get help while coding, and if you aren't, sorry, but you're just missing out; it's not perfect, but it doesn't need to be. It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.

My wife is a researcher and has to read LOTS of papers. Letting AI summarize them has made her enormously more efficient at filtering out which ones she needs to go into in more detail.

Generating relevant images for blog posts is now so easy to do (you may not like it, but as an author who used to use irrelevant photos before instead, I love it when you use it tastefully).

Seriously, I can't believe someone in 2024 can say with a straight face that there have been no useful applications of LLMs (almost all AI now is based on LLMs, as far as I know).



> I don't think there's a single developer who is not using AI to get help while coding

You are in a bubble.

> It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.

Subjectively. Not absolutely.



> I don't think there's a single developer who is not using AI to get help while coding

It's banned at my company due to copyright concerns. Company policy at the moment considers it a copyright landmine. It does need to be "perfect" at not being a legal liability at the very least.

And the blog post image thing is not a great point. AI images for blog posts, on the whole, are still quite terrible and immediately recognizable as AI generated slop. I usually click out of articles immediately when I see an AI image at the top, because I expect the rest of the article to be in line: low value, high fluff.

There are useful LLM applications, but for things that play to its strengths. It's effectively a search engine. Using it for search and summarization is useful. Using it to generate code based on code it has read would be useful if it weren't for the copyright liability, and I would argue that if you have that much boilerplate, the answer is better abstractions, libraries, and frameworks, rather than just generating that code stochastically. Imagine if the answer to assembly language being verbose was to just generate all of it rather than creating compiled programming languages.



AI solves gigantic issues and helps us with cancer, protein folding, potentially math and other fields, materials science, etc.

Bitcoin consumes as much energy as a country and has basically done nothing besides moving money from one group of people to a random other group of people.

And bitcoin is also motivated to find the cheapest energy regardless of any ethical reasoning (taking energy from cheap Chinese hydro and disrupting local energy networks), while AI will get energy from the richest companies in the world (MS, Google, etc.), which are already working on 24/7 CO2-neutral operations.



None of your problems in the first sentence are solved by LLMs. I do not dispute AI research and applications and their benefits, but the current LLM and GenerativeAI hype is of no value to hard scientific problems. Otherwise I agree with you.



The benefit is all for naught if it undermines the fabric of society at the same time. All these benefits will only go to the few who land on top of this mess.

It's continuing to widen the wealth gap as it is.



The wealth gap is widening while in parallel poorer people have better lives than ever.

We house, heat and give access to knowledge to a lot more people than ever before.

Cheap medical procedures through AI will help us all. The AI that will be able to analyse the x-ray picture from some third-world country? It only needs a basic x-ray machine and some internet, and the AI will be able to tell you what you have.

I'm also convinced that if AGI happens in the next 10 years, it will affect so many people that our society will have to discuss capitalism's future.



Bitcoin is literally turning greed into money, by means of wasting exponentially increasing amounts of electricity. It doesn't just not create value - to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.

LLMs deliver value. Right here today, to countless people across countless jobs. Sure, some of that is marketing, but that's not LLM's fault - marketing is what it always has been, it's just people waking up from their Stockholm syndrome. You've always been screwed over by marketers, and Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything, except maybe that some of the jobs in this space will go away, which for once I say - good riddance. There are more honest forms of gainful employment.

LLMs, for all their costs, don't burn energy superlinearly. More important, for LLMs, just like for fiat money, and about everything else other than crypto, burning electricity is a cost, upkeep, that is being aggressively minimized. More efficient LLMs benefit everyone involved. More efficient crypto just stops working, because inefficient waste is fundamental to cryptos' mathematical guarantees.

Anyway, comparing crypto and LLMs is dumb. The only connection is that they both eat GPUs and their novelty periods were close together in time. But they're fundamentally different, and the hypes surrounding them are fundamentally different too. I'd say that "AI hype" is more like the dot-com bubble: sure, lots of grifters lost their money, but who cares. Technology was good; the bubble cleared out nonsense and grift around it.



Well said. Too many people conflate AI and crypto, and dismiss both without understanding either. Crypto has demonstrated very limited benefit compared to its cost; exchanging value has been a solved problem for millennia. We're only beginning to understand what can be done with LLMs, but we can already see some limits. Although they cause some harm, to say they don't create any value is ridiculous. We can't yet see if the benefits outweigh the costs, but it looks to me like they will.



> You've always been screwed over by marketers, and Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything

This is pure, complacent nonsense. "We have always been surrounded with spam, 10x more won't change anything."

Yeah, why improve the status quo? Why improve the world? Why recycle when there's a big patch of plastic in the ocean.

It's an argument based on a nonsensical, cynical if not greedy position. "Everyone pollutes, so a little more pollution won't be noticed."



> It doesn't just not create value

Value is a subjective concept. One could argue that its value is that arbitrary quantities of it cannot be created by diktat.

> - to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.

One could argue that it takes a lot worse to maintain any currency such as USD as a currency. Full force of government law enforcement will be unleashed on you if you decide to have your own currency. There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.

I do not hold BTC. Nor do I trade it. But to discuss as if other currencies have no cost is not rational.



> There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.

Yes. But the point I'm making is, none of that benefits from waste. The waste is something everyone want to reduce. With Bitcoin, the trend is uniquely opposite, because the crypto system is secured through aggregate waste being way larger than any actor or group can afford.



But we do know that the proof-of-stake system we currently have is a lot cheaper and more advanced than what Bitcoin does.

Bitcoin doesn't yet solve any problem that is fundamental to our society and to a fiat system, like the trust issue:

If I exchange 1 bitcoin with you for any service or thing outside of the blockchain, I need the whole proof-of-stake protection of our normal existing money infrastructure, like lawyers, contracts, etc.

And no, smart contracts do not solve this issue.

What is left? A small number of transactions per day with high fees, "but" on decentralized infrastructure run by people we don't know, probably aggregated in data centers owned by big companies.



Proof of Work is far superior to Proof of Stake in a network with absolute fairness (security) being fundamental. Satoshi himself said he could find no other way.

Compare energy spent on global hash rate to all energy spent by mining metals, physical banking, financial services middle persons, etc. if you want to talk about energy usage and make any kind of sense.



Yes, start comparing the energy spent on bitcoin mining and the missing features. You will see that bitcoin already consumes a lot more energy than our proof of stake system.

What do you do when you want to exchange 1 bitcoin for 1 car and the person with the car doesn't give you the car after the 'absolute fairness/security' of transferring bitcoin to their wallet? You go back to our Proof of Stake system. You talk to a lawyer. You expect the police to help you.

The smallest issue in our society is just transferring money from left to right. This is not a hard problem. And please don't tell me how much easier it is to send a few bitcoins to Africa. Most people don't do this, and yes, Western Union exists.

Or try to recover your bitcoins. A friend has 100k in bitcoin but doesn't know the password anymore.

What do you do when someone breaks into your home and forces you to give them your bitcoin key? Yes, exactly: anonymous movement of money from you to them. Untraceable; wow, what a great thing to have!

And no, Satoshi 'himself' is not an expert in the global economy. He just invented bitcoin, and you can clearly see how flawed it is.



> Compare energy spent on global hash rate to all energy spent by mining metals, physical banking, financial services middle persons, etc. if you want to talk about energy usage and make any kind of sense.

You're ending up with the entire rest of civilisation on the other side of that comparison:

* Bitcoin, 0.5% of all energy use: 7 transactions per second total worldwide

* THE ENTIRE REST OF CIVILISATION AND EVERYONE IN IT AND EVERYTHING THEY DO, 199x the energy use, really quite a lot more than 1,393 transactions per second worldwide, and all the other stuff civilisation does too

What an amazing comparison for you to suggest.



You are not comparing apples to apples. BTC is comparable to gold or US treasuries. How often do you transact in physical gold? What is the time taken to go from a piece of gold in your pocket to cash to coffee? However, you can transact in paper gold, e.g. the ETF GLD, in microseconds with comparatively much lower transaction costs (settlement is still not immediate). How often do you transact in treasury bonds? Try paying for a coffee with your treasury bond. Let's see how many days that takes. Comparison with USD (ultimately representing US treasuries) on a number-of-transactions basis is not useful.



I'd disagree to a large extent, because the specific similarities are important:

* the VCs are often literally the same guys pivoting

* the promoters are often literally the same guys pivoting

* AI's excuses for the ghastly electricity consumption are often literally bitcoin excuses

I think that's an excellent start on the comparison being valid.

Like, I've covered crypto skeptically for years and I was struck by just how similar the things to be said about the AI grifters were, and my readers have concurred.



I look forward to the dream job of writing LLMs that argue with strangers on the internet as opposed to the current dream job of improving ad click rates by 0.0016% per quarter.



In an optimistic POV of this, eh, why not?

If models handle my day-to-day minutiae so I have more time, why the hell not...

(I know this is very optimistic POV and not realistic but still)



Because spam is incredibly selfish.

You're trying to take the time and attention of as many people as possible, without regard for whether or not they'll benefit.

One safeguard people have is knowing that it costs the sender something, in some way, to contact them. In this case, the sender's time and attention. LLM spam aims to foil that safeguard, intentionally.



I’m actually thrilled by this, as it means all the hack marketers that spam my inbox incessantly with whatever product they’re hawking - this time for sure perfect for my business, in spite of the fact I’ve ignored their last ten emails - are all out of a job, and good riddance.

The author sounds unfamiliar with this brand of marketing email, so I can see why it would come off disquieting to find it’s all AI - but it’s equally annoying from a human.

At least with AI sending this crap nobody can use these emails to justify their sales bonus.



How do you know it isn't exactly the same people, with zero reduction in headcount?

Designing the content of spam e-mails sounds like a small aspect of the "job".

If AI spams start fooling people more reliably, that's not something to celebrate.

This blogger thought, at first, that it came from an actual reader. I can't remember the last time I thought that a spam was genuine, even for a moment. Sometimes the subject lines are attention-getting, but by the time you see any of the body, you know.



If you do nothing that is discernible from noise (be that manually or through AI), unless your explicit goal is to generate noise, your ROI is 0.

Sure, AI spam can severely disrupt people's attention by competing with "real" people more competently. But people will not have twice the attention. We will simply shut down our channels when the amount of real-person-level AI spam goes to infinity, because there is no other option. Very quickly, nobody will be fooled, because being fooled would require superhuman attention.

Granted, that does not seem super fun either.



> If you do nothing that is discernible from noise (be that manually or through AI), unless your explicit goal is to generate noise, your ROI is 0.

We're talking about a group of people whose core skill is convincing people to pay for stuff that isn't worth it. You and I may know they're worthless, but that doesn't mean they're not getting paid.



Let's assume you have a mom who loves you very much, and she lets you know by text on a semi-regular basis. She asks you to come by on Friday. That might seem like a nice idea to you. You reply yes, and you go.

Now, imagine you got messages from what appears to be not 100 but, oh I don't know, 1 000 000 000 000 000 of the very best moms that have ever existed.

And they all do love you so very much. And they do let you know by writing the most beautifully touching text messages. And they all want to meet up on Friday.

What is going to happen next? Here is what is not going to happen: You are not going to consider meeting any of them Friday, any week. You will, after the shortest of whiles, shut down to this signal. Because it's not actually a signal anymore. The noise floor has gone up and the most beautifully crafted, most personalized text messages of all time are just noise now.



I don't know what you're trying to say. The people making payroll decisions have the same amount of people under them as they always did.



We all get to have only one mom, and moms don't live forever.

So once someone’s mom passes away, you can’t really fool them with one message, or dozens, from other moms anyway.



The emails are discernible from noise though. They literally have a signal-to-noise ratio higher than one. Noise would be pure RNG output. So I don’t know what you’re getting at.



No, I'm not being over-literal. Here's why:

Yes, it could be that for you a given advert is irrelevant or not worth your while, but the point he was making is that it won't even be worth it for the advertiser to put out the advertisement because it will be noise for everyone.

However, there is only one kind of noise that is noise for everyone: literal noise.

So long as the spam is about something, it is relevant to someone, and therefore it does not necessarily have zero ROI.

EDIT: The only kind of noise that has no semantic content is actual "mathematically pure noise", as the person below commented (/u/dang banned my account so I can't reply).



> However, there is only one kind of noise that is noise for everyone: literal noise.

I feel like you're a bit too literal here. When people talk about noise it doesn't mean mathematically pure noise. A signal-to-noise ratio close to 1 is also colloquially called noise.



He is talking about semantic noise: something that appears to have substance but is actually just slop. When everything is that, all email will become equivalent to slop. How could it not? Someone will be burned once or twice, but after that, there is a semantic phase shift.



"How could it not?" There are ways.

Consider that we have fairly decent anti-spam measures which do not look at the body of a message. To these methods, it is irrelevant how cleverly crafted the text is.

I reject something like 80% of all spam by the simple fact that the hosts which try to deliver it do not have reverse DNS. Works like magic.
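(For illustration only: a minimal sketch of that kind of check, assuming Python and its standard socket module. Real mail servers do this at the SMTP layer, often as forward-confirmed reverse DNS; the example IP below is made up.)

    import socket

    def passes_reverse_dns(ip: str) -> bool:
        # Look up the PTR record for the connecting host's IP.
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
        except (socket.herror, socket.gaierror):
            return False  # no reverse DNS at all: reject outright
        # Forward-confirm: the claimed hostname must resolve back to the same IP.
        try:
            _, _, addresses = socket.gethostbyname_ex(hostname)
        except (socket.herror, socket.gaierror):
            return False
        return ip in addresses

    # e.g. passes_reverse_dns("203.0.113.7") will likely return False for a throwaway spam host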

E-mail is reputation based. Once your IP address is identified by a reputation service as being a source of spam, subscribers of the service just block your address. (Or more: your entire IP block, if you're a persistent source of spam, and the ISP doesn't cooperate in shutting you down.)

To defeat reputation based services driven by reporting, your spams have to be so clever that they fool almost everyone, so that nobody reports you. That seems impractical.

One way AI spammers could advance in the war might be to create large numbers of plausible accounts on a mass e-mail provider like g-mail. It's impractical to block g-mail. If the accounts behave like unique individuals that each target small numbers of users with individually crafted content (i.e. none of these fake identities is a high-volume source), that seems like a challenge to detect.



These IP blocklist services also have a reputation of their own: if you are trying to send legitimate mail, there's a good chance your IP is on several of these blocklists for reasons you have nothing to do with. You can only remove it by grovelling and paying lots of money (extortion). So using one of them will cause you to reject legitimate mail.



What is "just slop" though? A spam advert for a product is still an advert for a product. Therefore it's not just semantic noise, it is still an advert for a product, and therefore his point is invalid: there is an ROI and people will continue to be employed to do it



> A spam advert for a product is still an advert for a product. Therefore it's not just semantic noise, it is still an advert for a product

Ergo slop and semantic noise.

Companies that used adverts which weren't noise went out of business long ago.



Except I am still discussing the point: the companies won't stop getting an ROI because "slop" still produces an ROI, even if people know it's slop, because it isn't contentless noise, it has semantic content.

Just because you and the others don't understand what point I'm making doesn't mean the conversation is "logjammed". I am still discussing the overall point, you just don't see it.



The problem is never what one person or one company is doing.

It's when everyone copies what that one person or one company is doing. Software makes the copying process dead easy.

Once the herd starts stampeding, it creates a secondary effect: an arms race for the finite attention of a finite target audience. The assault on and drainage of that finite attention pool happens faster and faster, and everyone gets locked in, trying to outspend the other guy.

A current example is presidential campaigns furiously trying to out-fundraise each other. It's going to top 15-17 billion this year. All the campaign managers, marketers, and advertisers make bank. And we know what quality of product the people end up with. Because why produce a high-quality product when you can generate demand via Attention Capture?

The chimp troupe is dumb as heck as a collective intelligence.



From the spammer blog post [1]: "I spent hours trying different data sources", "a lot of time was spent on fine-tuning the tone and structure of the email", "It took multiple tries to finally have the agent write emails in different language", etc. This won’t put marketers out of a job, but it will greatly improve their tooling and enable more people to do the same thing with even less qualification.

[1]: https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...



I don't really think that AI is the central issue here. The issue is that Kurt, the founder of Wisp, is a liar.

He misrepresented himself as a big fan of all these blogs, who's read their posts etc. and that's how he achieved such a high response rate. In effect he deceived people into trusting him enough to spend their time on a response.

Now ordinarily this would be a little "white lie" and probably not a huge deal, but when you multiply it by telling it 1,000 times it becomes a more serious issue.

This is already an issue in email marketing. The gold standard of course is emailing people who are double opted in and only telling the truth, and if AI is used to help create that sort of email I don't really have a problem. There is basically a spectrum where the farther away you get from that the progressively more illegal/immoral your campaigns become. By the time you are shooting lies into thousands of inboxes for commercial purposes... you are the bad guy.

Sorry to say but the real issue here is Kurt has crossed an ethical line in promoting his startup. He did the wrong thing and he could have done it pretty effectively with conventional email tools too.



I look forward to the blog post of how a hacker uses AI to respond to AI generated leads and then have them play with each other....and then uses AI to create content for a Youtube channel fighting back against marketers using said AI.

These early days are ripe for making some quick cash before it all comes crashing down.



> and then uses AI to create content for a Youtube channel fighting back against marketers using said AI.

I'm skeptical: It's easier to create bullshit than to analyze and refute it, and that should remain true even with an LLM in each respective pipeline.

----

P.S.: From the random free-association neuron, an adapted Harry Potter quote:

> Fudge continued, “Remove the moderation LLMs? I’d be kicked out of office! Half of us only feel safe in our beds at night because we know the AI are standing guard for misinformation on AzkabanTube!”

> “The rest of us sleep less soundly knowing you have put Lord Bullshittermort’s most dangerous channels in the care of systems that will serve him the instant he makes the correct prompts! They will not remain loyal to you when he can offer them much more scope for their training and outputs! With the LLMs and his old supporters behind him, you’ll find it hard to stop him!”



> I’m actually thrilled by this, as it means all the hack marketers that spam my inbox incessantly with whatever product they’re hawking - this time for sure perfect for my business, in spite of the fact I’ve ignored their last ten emails - are all out of a job, and good riddance.

> ...

> At least with AI sending this crap nobody can use these emails to justify their sales bonus.

What weird, misplaced animus. You're happy some salesguy got fired, while his boss sends even more spam and possibly makes even more money due to automation?

Those hack marketers rate-limited this kind of spamming. Now things are about to get worse.



> [...] while his boss sends even more spam and possibly makes even more money due to automation?

Wouldn't the exact argument apply to that boss as well?



Unless this is a big multinational spam organization, the boss of the person sending the email is probably the highest up. But no matter what, there will be someone at the top who does not get fired and will be able to reap all the rewards of the AI automation, at least until the AI revolution puts them up against the wall.



There's presumably heavier competition from other spammers, until everything is in equilibrium again. The wallets of potential spam victims only have so much total cash.



Some people don't realise how lucky they are that they are blessed by the cognitive lottery that affords them a brain and personality that lets them pursue an enriching and engaging career they feel is valued by society.

In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...



> In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...

That stereotype definitely rings true. Thank you for helping me put my finger on it!



Spam? Easy. Someone selling something? Spam! I might set up an automatic email responder that reads an email's contents, runs it through my own LLM, and, if the email is trying to sell me something, auto-replies with “fuck off!”
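(A rough sketch of such a responder, assuming Python's standard imaplib/smtplib; the is_sales_pitch classifier is a hypothetical stand-in for "my own LLM", and the host/account parameters are placeholders. As the reply below notes, actually answering spam confirms your address is live, so treat this as a thought experiment rather than a recommendation.)

    import email, imaplib, smtplib
    from email.message import EmailMessage
    from email.policy import default

    def is_sales_pitch(text: str) -> bool:
        # Hypothetical stand-in for "my own LLM": plug in any local classifier here.
        raise NotImplementedError

    def auto_reject(imap_host: str, smtp_host: str, user: str, password: str) -> None:
        imap = imaplib.IMAP4_SSL(imap_host)
        imap.login(user, password)
        imap.select("INBOX")
        _, ids = imap.search(None, "UNSEEN")
        for num in ids[0].split():
            # Fetch and parse the raw message.
            _, data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(data[0][1], policy=default)
            body = msg.get_body(preferencelist=("plain",))
            text = body.get_content() if body else ""
            if is_sales_pitch(text):
                reply = EmailMessage()
                reply["To"] = msg.get("Reply-To", msg["From"])
                reply["From"] = user
                reply["Subject"] = "Re: " + (msg["Subject"] or "")
                reply.set_content("fuck off!")
                with smtplib.SMTP_SSL(smtp_host) as smtp:
                    smtp.login(user, password)
                    smtp.send_message(reply)
        imap.logout()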



I'd rather delete/block it than reply/react to it at all. If you react, they know you exist and you are a valid target to re-target repeatedly, resold to other marketers.

Mark as SPAM or Block/Filter or Ignore.



Sadly I think it is illegal to sign up these addresses for every service known to you... Otherwise it would be an interesting SaaS opportunity: automatically sign up spammers for any number of newsletters or contact forms...



When they're paying real money to scam you, wasting their time isn't a terrible idea. Like keeping the Microsoft virus scammers on the phone for an hour while you set up a virtual machine for them to remote into.



lol, I treat my email inbox like a dumpster that I occasionally search when I know there's something there that I need to retrieve. The spam has won, I have moved to chat platforms for my communication needs.



I get - no exaggeration - several hundred spams a day. I have an OG email address that was grabbed by spammers back in the days of Network Solutions (so it’s been a while).

I maintain Inbox Zero, much of the time, and seldom have more than three or four emails in my client at any time.

I get there by being absolutely brutal about tossing emails.

I probably toss a couple of legit ones, from time to time, but I do have rules set up for the companies and people I need to hear from.

The thing that will be annoying is when AI can mimic these. Right now, that stuff is generally fairly clumsy, but some of the handcrafted phishing emails that I get are fairly impressive. I expect them to improve.

A lot of folks are gonna get cheated.

I do think that some of these Chinese gangs are going to create AI “pig butchering” operations, so it will likely reduce their need to traffic slaves.



Well, a lot of the scammers are actually slaves, trafficked into Myanmar boiler rooms, by Chinese Tongs.

If AI takes off for this stuff, the gangs are less likely to be kidnapping these poor schlubs.

So … I guess this would be a … positive outcome?

Not sure if AI zealots will be touting it, though.



Some of the marketing spam is so low-effort that I get addressed as "Dear {{prospect}}". It does make deleting the email easy, though, since the preview of the first line allows me to filter pretty fast!



If this works, those spammers will make more money and send more emails, scamming more people. Maybe some politician will fall for something like this, be publicly embarrassed, and lose a lot of money, and then something more will be done to address these spammers and scammers.



Why are you happy that people are out of a job here? You still suffer the ills of the product, now infinitely more incessant, at a marginal cost of $0.



I think it's reasonable to be happy that someone is not getting paid to do something you hate. In fact, if you're suffering unwillingly, you probably want as few people as possible to benefit.



I don't know which of "5 randos getting a living wage by spamming me" and "Altman getting filthy rich by spamming me" is worse. I'm inclined to say the latter, though of course it's quite close.

Wish SV would stop thinking anything that makes money is great, no matter the crap it inflicts on people. Guess I'm asking for way too much.



I don’t think so. Marketers don’t send X amount of spam because X is the right amount of spam they want to send. They are limited by how much money they want to pay in salaries and management, which defines how many people they can hire to send spam.

If the people they employ today suddenly became twice as productive, the company wouldn’t fire half of them - they just would enjoy twice the profit. The same applies to AI.



Getting peed at a couple of times a day isn't a problem if the pee-ers miss 99% of the time, right?

Small acts of malice are still acts of malice. Not everyone wants to live in a caveat emptor, dog-eat-dog society.



Having tried to start a business and known other business owners, I will die on this hill: sales and marketing are not "acts of malice". Without salespeople we wouldn't live in the world we live in today.

This is like the irrational hate some developers have for recruiters, despite them finding jobs for many people that they otherwise would never have known about.



Marketing is fundamentally aimed at changing people's opinions. This can be done

1. covertly (why do you need to do it covertly? Would people mind if they knew? Doesn't that indicate you're doing them a disservice?)

2. overtly, against people's will. (Again, doesn't that indicate you're doing them a disservice?)

3. overtly, with their consent (express or assumed). How often have you seen this happen?

The "indicates" vs "shows" distinction above deals with the edge case of "interacting with covert/unwanted marketing is actually good for them, even if they don't know it". I dare you to make that argument...



> This is like the irrational hate some developers have for recruiters

It's not like that. As a business owner, be honest with us and yourself: just how much of the sales and marketing you did was just bullshit? Exaggerated claims bordering on lies? Manipulative patterns? Inducing demand?

Approximately all marketing is that. It is that because it works, and those who refuse to do it get outcompeted by those who don't. Doesn't mean the world should be like that, or that I'd like to be subjected to it.

I also question the "we wouldn't live in the world we lived today" bit. In a competitive environment, marketing is a zero-sum game[0]: there's only so many people around, with so much money and time available; most of the marketing spend ends up being used to cancel out the efforts of the competition, and that race can consume all surplus of a company. Red Queen's race and all.

--

[0] Or negative-sum, if you account for externalities.



> Having tried to start a business and known other business owners, I will die on this hill: sales and marketing are not "acts of malice". Without salespeople we wouldn't live in the world we live in today.

That's exactly the reason why we hate them.



The logic in 'Without salespeople we wouldn't live in the world we live in today' doesn't really support the point you are trying to make.

Consider that without thieves we also wouldn't live in the world we live in today. That should not be read as supporting theft, only as an acknowledgement that theft exists and that we have designed our lived environment in response.



And you're qualified to declare this because?

Are we supposed to silently suffer because capitalism says so?

Spammers and salespeople are pretty much on the same level as criminals in my book. Heck, whenever someone calls me for some sort of unsolicited survey or similar, I think "these people have such low standards, they would also sell heroin on the street if they had any source."



Suffering is "the state of undergoing pain, distress, or hardship".

Having to delete the occasional marketing or sales email that get past your spam filter is hardly any of these. Annoying or frustrating, yes. Suffering? Really?



I am absolutely serious. Any employment has opportunity costs: a person who writes and sends out cold-call spam e-mail for 8 hours a day is a person who could be spending those 8 hours on something else, but isn't. Yes, switching jobs is not very easy, and it's stressful, but humans, thankfully, are not (yet) a species of highly specialized individuals with distinct morphological differences that heavily determine which jobs they potentially can or cannot do.
