AI enables new tools & features but in itself is not a product.
There's a good essay from Andrew Chen on this topic: Revenge of the GPT Wrappers: Defensibility in a world of commoditized AI models
"Network effects are what defended consumer products, in particular, but we will also see moats develop from the same places they came from the past decades: B2B-specific moats (workflow, compliance, security, etc), brand/UX, growth/distribution advantages, proprietary data, etc etc." [1]
Also check out the podcast with the team at Cursor/Anysphere for details on how they integrate models into workflows [2]
[1] https://andrewchen.substack.com/p/revenge-of-the-gpt-wrapper...
[2] https://www.youtube.com/watch?v=oFfVt3S51T4&t=1398s
Moats are the logistics network that Amazon has... OK, spend $10bn over 5 years and then come at me - and that's assuming I sit still the whole time...
Moats are what Google has in advertising... OK, pull 3% of the market while spending more money than god and see if it works...
brand/ux is not a moat, it's table stakes.
light_triad [3 hidden]5 mins ago
Agreed, UX can be easily copied, but brands are a moat for a number of (granted, psychological) reasons:
1. Status symbols - my Lambo signifies that my disposable income is greater than your disposable income
2. Fan clubs - I buy Nikes because they do a better job at promoting great athleticism, and an iPhone to pay double for hardware from 3 years ago
3. Visibility bias - As a late adopter I use whatever the category leader is (i.e. ChatGPT = AI, Facebook = the Internet)
What you describe sounds more like market power resulting from a monopoly
chrisin2d [3 hidden]5 mins ago
I think that UX cannot always be easily copied.
Technology enables UX. When the underlying technology is commodity—which is often the case—it's easy for competitors to copy the UX. But sometimes UX arises from the tight marriage of design and proprietary technology.
Good UX also arises from good organization design and culture, which aren't easy to copy. Think about a good customer support experience where the first agent you talk with is empowered to solve your issue on the spot, or there's perfect handoff between agents where each one has full context of your customer issue so you don't have to repeat yourself.
trash_cat [3 hidden]5 mins ago
One could argue the "non-moats" together can accumulate into something considerable, making it a moat. A brand is definitely a moat, but one that exists in the minds of consumers. This is not something you can overcome easily, even if the product is inferior.
redeux [3 hidden]5 mins ago
Kleenex is probably a good example of this. Tissues are a commodity but nevertheless people will still pay more for Kleenex branded tissues. That feels like a moat to me.
carlmr [3 hidden]5 mins ago
>brand/ux is not a moat, it's table stakes.
Except for the technical advantage of the M-series Macs, that's basically all of Apple's moat. Apple's brand and UX are what sell the hardware.
They make the UX depend on the number of Apple devices you have, so a little bit of network effect. But that's mostly still UX.
mtkd [3 hidden]5 mins ago
An enterprise using RAG, fine tuning etc. to leverage their data and rethinking how RL and vector DBs etc. can improve existing ops ... is likely going to make some existing moats much better moats
If your visibility on current state of AI is limited to hallucinogenic LLM prompts -- it's worth digging a bit deeper, there is a lot going on right now
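To make the RAG part concrete, the retrieval half is roughly this (a minimal sketch assuming the sentence-transformers and numpy packages; the documents, query, and model choice are purely illustrative):

```python
# Rough sketch: embed internal docs once, retrieve the closest ones per query.
from sentence_transformers import SentenceTransformer
import numpy as np

docs = [
    "Refunds are processed within 5 business days.",
    "Enterprise SSO is configured under Settings > Security.",
    "On-call rotations are documented in the ops runbook.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)  # one unit vector per doc

def retrieve(query: str, k: int = 2) -> list[str]:
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec            # cosine similarity, since vectors are normalized
    top = np.argsort(-scores)[:k]
    return [docs[i] for i in top]

# The retrieved snippets get prepended to the LLM prompt as grounding context.
context = "\n".join(retrieve("How do I set up single sign-on?"))
print(context)
```

The moat in that setup isn't the model; it's the corpus you can retrieve over and how the results are wired into existing workflows.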
ptx [3 hidden]5 mins ago
What specifically is going on right now in AI that's not based on "hallucinogenic LLM prompts"?
rv3392 [3 hidden]5 mins ago
ML is still a thing. I believe that most AI research is still non-LLM ML-related - things like CNN+Computer Vision, RL, etc. In my opinion, the hype around LLMs has a lot to do with their accessibility to the general public compared to existing ML techniques, which are highly specialised.
falcor84 [3 hidden]5 mins ago
Figure.ai's Helix: A Vision-Language-Action Model for Generalist Humanoid Control - https://news.ycombinator.com/item?id=43115079
Convolutional neural networks for image recognition and more generally image processing. They are much better than they were a few years ago, when they were all the rage, but the hype has disappeared. These systems improve the performance of radiologists at detecting clinically significant cancers. They can be used to detect invasive predators or endangered native wildlife using cameras in the bush, in order to monitor populations, allocate resources for trapping of pests, etc.
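For a sense of how commoditized the model side of this has become, here's a minimal inference sketch with a pretrained CNN (torchvision's ResNet-18; the image path is a placeholder):

```python
# Classify an image with a pretrained ResNet-18 and its bundled preprocessing.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the resize/normalize pipeline the weights expect

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=-1)

top = probs.topk(3)
for idx, p in zip(top.indices[0].tolist(), top.values[0].tolist()):
    print(weights.meta["categories"][idx], round(p, 3))
```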
ML generally is for pattern recognition in data. That includes anomaly detection in financial data, for example. It is used in fraud detection.
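A toy version of that anomaly-detection use case, just to show the shape of it (scikit-learn's IsolationForest over made-up transaction features):

```python
# Flag unusual transactions (amount, hour of day) with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[60.0, 13.0], scale=[25.0, 3.0], size=(1000, 2))  # everyday purchases
odd = rng.normal(loc=[950.0, 3.0], scale=[50.0, 1.0], size=(10, 2))       # large, late-night
X = np.vstack([normal, odd])

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = clf.predict(X)  # -1 = suspected anomaly, 1 = normal
print(int((flags == -1).sum()), "transactions flagged for human review")
```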
Image ML/AI is used in your phone's facial recognition, in various image filtering and analysis algorithms in your phone's camera to improve picture quality or allow you to edit images to make them look better (to your taste, anyway).
AI image recognition is used to find missing children by analysing child pornography without requiring human reviewers to trawl through it - they can just check the much fewer flagged images.
AI can be used to generate captions on videos for the deaf or in text to speech for the blind.
There are tons of uses of AI/ML. Another example: video game AI. Video game upscaling. Chess and Go AI: NNUE makes Chess AI far stronger and in really cool creative ways which have changed high level chess and made it less drawish.
Well “AI” is a lot more than just generic text generators. ML (read: AI that makes money) is the bread and butter of all of the largest internet companies. There’s no LLM that can accurately predict user behavior.
And even if there was, the fast follower to the Bitter Lesson is the Lottery Ticket Hypothesis: if you build a huge, general model to perform my task, I can quickly distill and fine tune it to be better, cheaper, and faster.
Jerrrrrry [3 hidden]5 mins ago
> if you build a huge, general model to perform my task, I can quickly distill and fine tune it to be better, cheaper, and faster.
"better" = "smaller, more specialized, domain-specific"
blotfaba [3 hidden]5 mins ago
Forgot to mention teaming up with government so they can literally create and defend your moat with the entire armed forces
BobbyJo [3 hidden]5 mins ago
I now see AI as part 2 of the CPU evolution. I think there are lots of parallels we can draw by looking at it that way:
1) Lots of players enter at the start because there are no giant walled gardens yet.
2) Being best in class will require greater and greater capex (like new process nodes) as things progress.
3) New classes of products will be enabled over time depending on how performance improves.
There is more there, but, with regard to this post, what I want to point out is that CPUs were pretty basic commodities in the beginning, and it was only as their complexity and scale exploded that margins improved and moats were possible. I think it will play out similarly with AI. The field will narrow as efficiency and performance improves, and moats will become possible at that point.
epistasis [3 hidden]5 mins ago
> In short, unless you’re building or hosting foundation models, saying that AI is a differentiating factor is sort of like saying that your choice of database is what differentiated your SaaS product: No one cares.
I still think the analogy of AI to databases is perfect. No database comes set up for the applications where it gets deployed. The same is true for LLMs, except for some very broad chatbot stuff where the big players already own everything.
And if AI is just these chat bots, the technology is going to be pretty minor in comparison to database technology.
didip [3 hidden]5 mins ago
Hasn't it always been the case since the beginning of AI/ML trends?
Once an algorithm/technique is discovered, it becomes a free library to install.
Data and user-base are still the moat. The traffic that ChatGPT stole from Google is the valuable part.
goatlover [3 hidden]5 mins ago
Is everything that OpenAI or these other proprietary companies do with their models known?
arminiusreturns [3 hidden]5 mins ago
No it is not, especially the filter controls and a few other "add-ons" that are not actually core parts of LLMs. As for what they actually do with it, we don't know that either, except through leaks.
HellDunkel [3 hidden]5 mins ago
When will people realize that the use of AI art in any piece of content is almost as bad as a typo or ads? It devalues the content, adds a barrier to acceptance, and gives the feeling that the creator does not value my time and attention.
csallen [3 hidden]5 mins ago
That depends entirely on the design of the context around the content. There is no hard-and-fast rule that AI is bad.
tartoran [3 hidden]5 mins ago
> There is no hard-and-fast rule that AI is bad.
That is true, but the current context - the rush to appropriate freely from others, of course without contributing or crediting anything back, not to mention the sludge that has exploded all over the internet - leaves me agreeing with the OP. I'm more than sure that AI could be used in very intelligent ways by artists themselves, though - not in a lazy way to cut corners and pump out content, but in a more deliberate way where the effort is visible (and I don't just mean visual arts).
outworlder [3 hidden]5 mins ago
But this piece is talking about AI. It seems fitting.
jimmaswell [3 hidden]5 mins ago
Depends. I like when it's used artistically and you're intended to notice. I've been listening to a lot of AI covers and some of them lean into the artifacts to a high degree in different ways, akin to noise music. The first track here is a great example: https://youtu.be/HgfsKS-Ux_A
The moat is the 500 billion dollar investments we got along the way! (Just partly joking)
mcharawi [3 hidden]5 mins ago
This article didn't really say all too much: essentially, you can't differentiate your product with prompts alone, and you need deeper integrations with workflows. OK, that's pretty clear - what else?
lompad [3 hidden]5 mins ago
Deepseek showed us very well that OpenAI, at the very least, does not have a significant moat. Also, that the ridiculous valuations dreamt up for AI companies are make-believe at best.
If - and that's a big if - LLM tech turns out to be the path to true (not the OpenAI definition of) AGI, everybody will be able to get there in time; the tech is well known and the research teams are notoriously leaky. If not, another AI winter is going to follow. In either case, the only ones who are going to make a major profit are the ones selling shovels during the gold rush - Nvidia. Well, and the owners who promised investors all kinds of ridiculous nonsense.
Anyway, the most important point, in my opinion, is that it's a bad idea to believe people financially incentivized to hype their AIs into unrealistic heights.
tibbar [3 hidden]5 mins ago
It seems increasingly likely that LLM development will follow the path of self-driving cars. Early on in the self-driving car race, there were many competitors building similar solutions and leaders frequently hyped full self-driving as just around the corner.
However, it turned out to be a very difficult and time-consuming process to move from a mostly-working MVP to a system that was safe across the vast majority of edge cases in the real world. Many competitors gave up because the production system took much longer than expected to build. However, today, a decade or more into the race, self-driving cars are here.
Yet even for the winners, we can see some major concessions from the original vision: Waymo/Tesla/etc have each strictly limited the contexts in which you can use self-driving, so it's not a 100% replacement for a human driver in all cases, and the service itself is still very expensive to run and maintain commercially, so it's not necessarily cheaper to get a self-driving car than a human driver. Both limitations seem likely to be reduced in the years ahead: the restrictions on where you can use self-driving will gradually relax, and the costs will go down. So it's plausible that fleets of self-driving cars are an everyday part of life for many people in the next decade or two.
If AI development follows this path, we can expect that many vendors will run out of cash before they can actually return their massive capital investment, and a few dedicated players will eventually produce AIs that can handle useful subsets of human thoughtwork in a decade or so, for a substantial fee. Perhaps in two decades we will actually have cost-effective AI employees in the world at large.
outworlder [3 hidden]5 mins ago
> However, today, a decade or more in to the race, self-driving cars are here.
In a limited fashion, though. We don't have generalized fully autonomous vehicles just yet.
janalsncm [3 hidden]5 mins ago
There are plenty of real applications that Nvidia is fueling. Things that make money. There will be a reckoning for the hype men, but there is a good amount of value still there.
(As always, the task of the hype man isn’t to maintain the bubble indefinitely, but just long enough for him to get his money out.)
There are more fundamental issues at play, where I see stock price fairly divorced from actual, tangible value, but the line still goes up because people are going to keep buying tulips forever, right?
It sucks because I think investing in the stock market takes away from dynamic investment in innovative startups and real R&D, and shifts capital towards shiny things.
mritchie712 [3 hidden]5 mins ago
why does cursor have $100m ARR then?
I read the first one when it was posted here too and I don't get their point. It's a lot of words, but what are you trying to say?
post-it [3 hidden]5 mins ago
I think what they're saying is that Cursor makes money because it's a good editor in general that integrates AI well, not just because of the fact that it uses AI.
If you just slap a ChatGPT backend onto your product, your competitors will do it too and you gain nothing without some additional innovation.
noman-land [3 hidden]5 mins ago
Cursor without AI is just VSCode. They came up with an AI-native code crafting experience that no one else has thought of before and if you asked me how they did it I wouldn't be able to answer you.
mbesto [3 hidden]5 mins ago
(1) That's what the original author is saying. Their valuation is possibly incorrect.
(2) On the other hand, Cursor's value is essentially gluing the two things together. If your data is already in the castle (e.g. my codebase and historical context of building it over time is now in Cursor's instance of Claude) then the software is very sticky and I likely wouldn't switch to my own instance of Claude. The author also addresses this noting that "how data flows in and out" has value, which Cursor does.
physicsguy [3 hidden]5 mins ago
But how defensible is that in the market?
tonyhart7 [3 hidden]5 mins ago
But that's just what Cursor actually is???? Strip the ChatGPT integration and it is just VSCode.
HarHarVeryFunny [3 hidden]5 mins ago
TFA is saying that it's your product that matters, as always, and that using AI can't be your moat since everyone has access to AI.
It seems Cursor did a bunch of things right, from choosing to base it on an already popular editor, to the vision and specific ways they have integrated AI, to the flexibility of which models to use. No doubt there was some "early mover" advantage too.
Certainly the AI isn't their moat since it's mostly using freely available models (although some of their own too I believe), and it remains to be seen how much of a moat of any kind or early-mover advantage they really have. The AI-assisted coding market is going to be huge, and presumably will attract a lot more competition.
I'm old enough to remember when the BRIEF (basic reconfigurable interactive editing facility) editor (by Underware) took the world by storm, but where is it now?
Any other former BRIEF users/fans out there ?!
HyprMusic [3 hidden]5 mins ago
The article heavily emphasises the point that having the "smartest" AI isn't a moat, it's the experience and integrations that build the moat. That's exactly why Cursor is more popular than Aider.
rvz [3 hidden]5 mins ago
> why does cursor have $100m ARR then?
First mover advantage.
They are not safe against Microsoft, who have the resources to copy every feature that Cursor has into VS Code, can afford to offer it for "free" for a very long time, and have access to the exact same models as Cursor.
So not only does that tell you there is no moat; offering the best tools and models for free is exactly what Microsoft's modern definition of "Extinguish" is in their EEE strategy.
sponnath [3 hidden]5 mins ago
Copilot does seem to be catching up in some areas but from my testing Cursor still has better UX. There's substantial value in the "glue" that Cursor provides, one that Microsoft has failed to replicate so far.
osigurdson [3 hidden]5 mins ago
>> First mover advantage
Cursor was released after Copilot
rendang [3 hidden]5 mins ago
Sometimes it's the first mover to a critical threshold of user value-add
osigurdson [3 hidden]5 mins ago
This re-defines "first mover" to mean anyone who is currently ahead. This dilutes meaning too much imo.
jrflowers [3 hidden]5 mins ago
This makes sense. Cursor makes money therefore they are the only AI code editor
potatoman22 [3 hidden]5 mins ago
Maybe the lesson should be: a good product can make money even without a moat.
forrestthewoods [3 hidden]5 mins ago
An alternative phrasing is “you can’t build a moat with an AI model”. Which Cursor exemplifies by way of supporting 10 different models and adding more all the time.
osigurdson [3 hidden]5 mins ago
My suggestion: don't listen to anybody. If interested, learn about how these things actually work and make your own predictions.
coliveira [3 hidden]5 mins ago
That's something these companies don't seem to understand. Any model that is smart enough to be considered a true AI is also smart enough to teach what it knows to other AI models. So the process of creating a complex AI is commoditized. It just takes another group with access to the original AI to train other models with similar knowledge.
I also believe that, just like humans, AI models will be specialized so we'll have companies creating all kinds of special purpose models that have been trained with all knowledge from particular domains and are much better in particular fields. Generic AI models cannot compete there either.
CamperBob2 [3 hidden]5 mins ago
People who say this are forgetting what it was like in the 2000s, when patent suits were flying back and forth like WWI gas shells. Once people realized that they could patent almost any old idea simply by adding "with a computer" or "on the Internet," the floodgates opened.
Rest assured, right now people are filing claims to the same old stuff, only now "with AI" tacked on. And rest assured, the rubber-stamping machine in the USPTO's basement is running 24/7 approving them.
Legend2440 [3 hidden]5 mins ago
What’s different now is that tech companies all have mutually assured destruction pacts.
Many key pieces of AI technology, like transformers, have patents. If you start trying to enforce your “…with AI” patent against Google, they’re just going to turn around and sue you for using their patented technology.
CamperBob2 [3 hidden]5 mins ago
Which works great against everybody except NPEs.
dfedbeef [3 hidden]5 mins ago
You actually can't build a lot of things with AI
jbs789 [3 hidden]5 mins ago
Solve the problem.
Start there, no matter the tool.
aqueueaqueue [3 hidden]5 mins ago
Replace AI with "compute" to see why it makes sense.
PaulHoule [3 hidden]5 mins ago
It's not differentiation when everybody else is adding AI features "just because."
PeterStuer [3 hidden]5 mins ago
Regulatory capture: "Hold my beer!"
michaelcampbell [3 hidden]5 mins ago
Yup; AI can't, but AI plus laws made by purchased lawmakers sure can.
d--b [3 hidden]5 mins ago
What’s the thing about moats?
First: AI requires an awful lot of resources, which in itself is a moat.
Second: having a moat doesn’t prevent your service from being attacked. See Tesla.
Third: not having a moat doesn’t prevent your service from dominating. See TikTok.
rendang [3 hidden]5 mins ago
Network effect, as in TikTok's case, is a big moat!
bitwize [3 hidden]5 mins ago
What we call today "AI" will replace human thought about as much as the TI-99/4A Speech Synthesizer replaced the human voice. Despite not putting talented voice actors out of work by a long shot, artificial voices have found many uses: automated announcements, weather information for aviation and maritime applications, assistive software for the blind, etc. So it will be with machine learning. Use it as a tool to augment your ability to find trends in seas of unstructured data, but the hard intellectual work you'll still need to do yourself. I wish more people would get this.
JohnMakin [3 hidden]5 mins ago
> In many ways, the whole point of AI applications is that it should feel like magic because something that you previously had to do by hand is now fully automated with believable intelligence. If you’re thinking about taking traditional forms of UX and adding AI to them, that’s an okay starting point but not a defensible moat.
No. Stop! Please! I want my UX in an app to do the damn thing I precisely intend it to do, not what it believes I intend to do - which is increasingly common in UX design - and I hate it. It's a completely opaque black box and when the "magic" doesn't work (which it frequently does not, especially if you fall outside of "normal" range) - the UX is abysmal to the point of hilarity.
tibbar [3 hidden]5 mins ago
That's fine, but there are many tasks (for example, natural-language processing) where there is no classical way to precisely define the steps you want to be performed, and hand-tuning a list of keywords or heuristics is time-consuming and less accurate than reading through the texts yourself. In this case, providing high-level instructions to an LLM to read through the text and attempt to apply the spirit of your instructions is invaluable.
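Concretely, "high-level instructions to an LLM" can be as small as this (a sketch using the OpenAI Python client as one example; the model name and label set are placeholders, not a recommendation):

```python
# Label free-form text with a short natural-language instruction instead of hand-tuned keywords.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def label_ticket(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the customer message as 'billing', 'bug', or 'other'. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

print(label_ticket("I was charged twice for my subscription last month."))
```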
solid_fuel [3 hidden]5 mins ago
Similar vein - I can't even type "2 + 2 = 5" on my iPhone anymore because the keyboard keeps suggesting "2 + 2 = 4" as soon as I finish the first part and autofills it when I hit the space. It's extremely frustrating, and I feel like a keyboard is a prime example of a system that should do what I say. If I make a mistake, I'll correct it.
ppqqrr [3 hidden]5 mins ago
It's been going downhill for a while, even before AI. The VC-funded React SaaS craze took the "put something barely functional on the web, make some subscription cash" model and scaled it to what is essentially a scam/spam industry.
So if the UX feels increasingly framed in terms of what the "developers" see/want/believe to be profitable, and less from the actual user's perspective, that's because the UX was sketched by hustlers who see software development explicitly as a "React grind to increase MRR/ARR."
skydhash [3 hidden]5 mins ago
I think React is nice if you actually have a state-heavy application (maps, video players, editors, ...) to build, and the web is OK as a distribution platform. But most web applications are just data display and form posting. I'm still disappointed with GitHub going full SPA.
fijiaarone [3 hidden]5 mins ago
what on earth could be state heavy about a video player? timestamps? thumbnails?
tibbar [3 hidden]5 mins ago
I mean, a video player is actually quite stateful. Buffering video data and then decoding delta-changes to get the current image, while adapting to varying connection speeds, external CDNs, and syncing with audio, is a lot! Not counting user interactions like seeking, volume, full-screen, etc. Web browsers have a built-in player that would do a lot of this for you now, but if you were Netflix and wanted a really good player you'd probably customize a lot of this.
Granted, React would not be too helpful with the core video player engine, but actual video apps have lots of other features like comments, preview the next videos, behind-the-scenes notes, etc.
And then, if it's two-way video, you have a whole new level of state to track, and you'd definitely need to roll your own to make a good one.
Etheryte [3 hidden]5 mins ago
This is a good summary of the downturn of Google search. It constantly tries to offer you the kind of results it thinks you should want, not what you actually searched for. Thank god for Kagi and other up and coming alternatives.
potatoman22 [3 hidden]5 mins ago
I don't see integrating AI into UI/UX to be incompatible with enabling the user to do as she intends. In fact, I think the thoughtful use of AI could better help users do what they want. It is annoying when "smart" features completely miss the mark, but it's a worthy pursuit in my book.
tshaddox [3 hidden]5 mins ago
> I want my UX in an app to do the damn thing I precisely intend it to do, not what it believes I intend to do
This is pretty weird epistemological phrasing. It's a bit like saying "I want to know the truth, not just what I believe to be the truth!"
mrob [3 hidden]5 mins ago
It's just stating a preference for deterministic behavior, i.e. tools not agents. Once you've learned how to use a tool it will reliably do what you intend it to do. With agents, whether human or AI, less initial learning is required but there's always a layer of indirection where errors can arise. The skill ceiling is lower.
alwa [3 hidden]5 mins ago
Or maybe, “I want to know the truth, not just what you believe I’d like to hear as truth.”
Which seems… pretty reasonable to me. Both involve the other party substituting some vaguely patronizing judgment of theirs for the first party’s.
achierius [3 hidden]5 mins ago
But isn't that generally true? That's why anyone bothers to question their existing beliefs -- because they prefer "the truth" over "what [they] believe to be true". Not everyone does, of course, but if everyone just valued "what [they] believe to be true", then any sort of self-reflection on belief would be a strict net negative!
voidhorse [3 hidden]5 mins ago
I think the "the user doesn't really know what they want" idea has been taken a bit too far.
Basically all applications these days are like this. Rather than assume users are sentient, intelligent beings capable of controlling devices and applications in order to achieve some goal, modern app design seems to be driven by a philosophy which views the operators of applications as imbeciles that require constant hand-holding and who must be given as little control and autonomy as possible. The analog world becomes more appealing every day because of this.
fijiaarone [3 hidden]5 mins ago
"They" have been working for decades to turn the computer back into a TV. And they've mostly succeeded. People don't use the internet, they consume social media algorithms. And they don't play video games, they watch cut-scenes with animated button clicks in between.
chefandy [3 hidden]5 mins ago
Developers expect UI controls to afford fine-grained control over functions, as if they’re a thin wrapper for the API. For example, if there’s some condition that prevents something from happening in a program and there are one or two things you can do to mitigate it, most developers would rather know the process failed and be given the choice about how/when to proceed. Our understanding of what controls ‘should’ do is based on a sophisticated mental model of software architecture, networking, etc. We don’t have to deliberately reason about what the computer is doing - we just intuit that it’s writing a file, or making a web request, and automatically reason about the appropriate next steps based on what happens during that operation - sort of like writing code. Knowing that the process failed, and how, gives us valuable information that we can use to troubleshoot the base problem and possibly improve something.
Nontechnical users do not have that mental model: they base their estimation of what a control should do on what problem they believe that control solves. The discrepancy starts with misalignment between the user’s mental model of the problem and how the software solves it. In that hypothetical system where some condition is preventing something from happening and there are one or two things you can do to mitigate it, the nontechnical user doesn’t give a fuck if and how something failed if there’s a different possible approach that won’t fail. They just want you to solve their problem with the least possible amount of resistance, and don’t have the requisite knowledge to understand what the program is telling them, let alone how it relates to the larger problem. That’s why developers often find UIs built for nontechnical users frustrating and “dumbed down”. For users who are concerned only with the larger problem and have no understanding of the implementation, giving them a bunch of implementation-specific controls is far dumber than trying to pivot when there’s a stumbling block in the execution and still do what needs to be done without user intervention. Moreover, even having that big mess of controls on the screen for more technically sophisticated users increases cognitive load and makes it more difficult for nontechnical users to figure out what they need to do.
It’s a frustrating disconnect, but it’s not some big trend to make terrible UIs as developers often assume. Rather, it’s becoming more common because UI and UX designers are increasingly figuring out what the majority of users actually want the software to do, and how to make it easier for them to do it. When developers are left to their own devices with interfaces, the result is frequently something other developers approve of, while nontechnical users find it clunky and counterintuitive. In my experience, that’s why so few nontechnical users adopt independent user-facing FOSS applications instead of their commercial counterparts.
m3kw9 [3 hidden]5 mins ago
Acceleration is the moat with AI: you get to the point where it self-improves first, and you create new tech that keeps your lead.
dmitrygr [3 hidden]5 mins ago
So far, that is science fiction, and there is no reason to expect a change there.
apwell23 [3 hidden]5 mins ago
Is it just me, or is AI stuff turning out to be really stupid and cringe? Like this
Super Bowl Salesforce ad that a friend shared with me to get my comments: https://www.youtube.com/watch?v=rcLAeURXvHY. I still have no idea wtf this is or what AI has to do with it.
I feel like every enterprise ad or marketing site is like this. I have no idea what it does, but it says it'll make numbers go up, so execs buy. It must work to a degree, because it's so common.
fullshark [3 hidden]5 mins ago
The problem is there’s no consumer uses for it, it’s all enterprise.
askafriend [3 hidden]5 mins ago
Salesforce is the culprit here, not AI.
apwell23 [3 hidden]5 mins ago
Is there a good AI product ad?
fijiaarone [3 hidden]5 mins ago
Alright alright alright...you've got some Earnst on you.
Dig1t [3 hidden]5 mins ago
It’s honestly the best type of technology.
It’s something that everyone has to implement because their products will be inferior without it. But it’s not something you can use to build a monopoly easily, and since everyone has to do it there will be many people racing to the bottom pushing the price down.