Sheesh, I didn't expect my post to go viral. Little explanation:
I downloaded and ran Cursor for the first time when this "error" happened. It turned out I was supposed to use the agent instead of the inline Cmd+K command, because inline has some limitations while the agent doesn't.
Nevertheless, I was surprised that AI could actually say something like that, so just in case I screenshotted it - some might think it's fake, but it's real, and it makes me wonder whether AI will start giving attitude to its users in the future. Oh, welp. I certainly didn't expect it to blow up like this; since this was all new to me, I thought it might be an easter egg or just a silly error. Turns out it hadn't been seen before, so there we are!
I also had a fun Cursor bug where the inline generation got stuck in a loop and generated a repeating list of markdown bullet points for several hundred lines until it decided to give it a break.
As a pretty advanced Stable Diffusion (SD) user, I can draw some parallels (but won't claim there's a real connection).
Sometimes you get a composition from a specific prompt+seed+etc., and it has an alien element that is surprisingly stable and consistent throughout “settings wiggling”. I guess training just happens to smear some ideas across certain cross-sections of the “latent space”, which may not be that explicit in the datasets. It’s a hyper-dimensional space after all, and not everything it contains can be perceived verbatim in the training set (hence the generation capabilities, afaiu).

A similar thing can be seen in SD LoRA training, where some “in” idea gets transformed into something different, often barely interpretable, but still stable in generations. While you clearly see the input data and know that there’s no X there, you sort of understand what the precursor is after a few captioning/training sessions and some general experience.

(How much you can sense in the “AI” routine but cannot clearly express is another big topic. I sort of like this peeking into the “latent unknown” which skips language and sort of explodes into a mute, vague understanding of things you’ll never fully articulate. As if you hit the limits of language, and that is constraining. I wonder what would happen if we broke through this natural barrier somehow and became more LLMy rather than the opposite.) /sot
I think the inline command palette likely ran into an internal error that left it unable to generate, and then its "come up with a message telling the user we can't do this" generation got StackOverflow'd.
There's always some background entropy on a forum; just ignore it and fix it if you feel like it. People fat-finger "flag" all the time as well. https://news.ycombinator.com/flagged
This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]
With programming, the same basic tension exists as with the smarter, more effective AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
So the problem seems to boil down to this: how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.
So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?
In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse than just laziness; it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.
And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.
> With programming, the same basic tension exists as with the smarter, more effective AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
That's literally the entirety of human technological advancement, in a nutshell. We'd ideally avoid all effort that's incidental to the goal, but if we can't (we usually can't), we invent tools that reduce this effort - and iterate on them, reducing the effort further, until eventually, hopefully, the human effort goes to 0.
Is that a "basic human (animal?) instinct to take the path of least resistance"? Perhaps. But then, that's progress, not a problem.
There are actually two things going on when I'm coding at work:
1) I'm writing the business code my company needs to enable a feature/fix a bug/etc.
2) I'm getting better as a programmer and as someone who understands our system, so that #1 happens faster and is more reliable next time.
Using AI codegen can (arguably; the jury is still out on this if we include total costs, not just the costs of this one PR) help with #1. But it is _appreciably bad_ at #2.
In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2, and, again, arguably #1 as well); it is just a problem in disguise.
> In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2 (...)
Plagiarism is a great analogy. It's great because what we call plagiarism in an academic setting, in real-world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and so on.
So the question really is: how much do we value #2? Or rather, which aspects of it do we value, because I see at least two:
A) "I'm getting better as a programmer"
B) "I'm getting better as a someone who understands our system"
As much as I hate it, the brutal truth is, you and I are not being paid for A). The business doesn't care, and it only gets so much value out of it anyway. As for B), it's tricky to say whether and when we should care - most software is throwaway, and the only thing that happens faster than that is the programmers working on it changing jobs. Long-term, B) has very little value; short-term, it might benefit both the business and the programmer (the latter by virtue of making the job more pleasant).
I think the jury is still out on how LLMs affect A). I feel that it's not making me dumber as a programmer, but I'm from the cohort of people with more than a decade of programming experience before even touching a language model, so I have a different style of working with LLMs than people who started using them with less experience, or people who never learned to code without them.
The CTO asked the CEO, "What happens if we train these people and they decide to leave?" The CEO replied, "What happens if we don't train them and they decide to stay?"
Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
> Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
The latter represents a mindset that's prevalent at a large portion of companies. Most companies aren't FAANGs or AAA game studios (w/e) looking for the best of the best; most companies are outsourcing a large portion of the work and/or racing to the bottom on quality. Many aren't even in any position to judge competence, nurture it, or reward it.
They just want "5 years experience" in whatever Cloud crap and Java thing.
> Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
That's true, but by the same token, it's also true the world is mostly made of shit companies.
Companies are just following the local gradient they perceive. Some can afford to train people - mostly ones without much competition (and/or big enough to have a buffer against competitors and market fluctuations). Most can't (or at least think they can't), and so don't.
There's a saying attributed to the late founder of an IT corporation in my country: "any specialist can be replaced by a finite number of students". This may not be true in a useful fashion[0], but the sentiment feels accurate. The market seems to be showing that for most of the industry, fresh junior coders are at the equilibrium: skilled enough to do the job[1], plentiful enough to be cheap to hire - cheap enough that it makes more sense to push seniors away from coding and towards management (er, "mentoring new juniors").
In short, for the past decade or more, the market was strongly suggesting that it's cheaper to perpetually hire and replace juniors than to train up and retain expertise.
--
[0] - However, with LLMs getting better than students, this has an interesting and more worrying corollary: "any specialist can be replaced by a finite amount of LLM spend".
[1] - However shitty. But, as I said, most software is throwaway. Self-reinforcing loop? Probably. But it is what it is.
This reminds me of a quote from Dr. House. In one of the episodes with the smart girl who also studied mathematics (I don't remember her name), Cuddy said something like, "You will figure this out, the sum of your IQs is over X". To which House replied, "The same applies to a group of four stupid people".
Sometimes knowledge and experience aren't additive: if none of the students has a certain experience or knows a certain fact, the sum of the students will still not have that experience or know that fact.
I think it's more accurate to say that the company doesn't need as many people with 20+ years of experience, who have lower energy and less attention to commit to the company and who demand higher pay, versus people with 5-20 years of experience, youthful energy, and fewer external commitments.
This is especially true now that senior employees have gotten more demanding about wanting to see their children, which didn't happen at scale in the past.
> Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
I rather see the problem in US culture: in many other countries, switching companies every few years is considered a sign of low loyalty to the company, and thus a red flag in a job application.
> It's great because what we call plagiarism in an academic setting, in real-world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and so on.
That is absolutely not true. A much closer analogue of industry collaboration in the academic setting would be cross-university programs and peer review. The actual analogue of plagiarism is taking the work of another and regurgitating it with small or no changes while not providing any sources. I'm sure you can see what that sounds more akin to.
You're half right. Reuse is one half of plagiarism. But it's not the crucial thing; what gives the word its meaning is the lack of attribution, passing the work off as your own. That is the moral wrong. It would be wrong in a work context too: taking credit for the work of others.
> Plagiarism is a great analogy. It's great because what we call plagiarism in an academic setting, in real-world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and so on.
In academia, if a result is quoted, this is called "scholarship". If, on the other hand, you simply copy-paste something without giving a proper reference, it's "plagiarism".
Which games? I'd argue this isn't true for AAA games, where you see major firms milking the big releases for a decade in some cases: three versions of The Last of Us, GTA V just got an "enhanced" patch on PC, etc.
> That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
This has been going on for decades. It is called outsourcing development. We've just previously passed the work to people in countries with lower wages and worse conditions; now people are increasingly passing the work to a predictive text engine (either directly, or because that is what the people they are outsourcing to are doing).
Personally I'm avoiding the "AI" stuff[1], partly because I don't agree with how most of the models were trained, partly because I don't want to be beholden to an extra third-party commercial entity to be able to do my job, but mainly because I tinker with tech because I enjoy it. Same reason I've always avoided management: I want to do things, not tell others to do things. If it gets to the point that I can't work in the industry without getting someone/something else to do the job for me, I'll reconsider management or go stack shelves!
--------
[1] It is irritating that we call it AI now because the term ML became unfashionable. Everyone seems to think we've magically jumped from ML into _real_ AI, which we now have to call something else (usually AGI) to differentiate reasoning from glorified predictive text.
> Everyone seems to think we've magically jumped from ML into _real_ AI
But we have:
- LLMs pass the Turing test more convincingly than anything before
- LLMs are vastly more popular than any AI/ML method before them
- "But they don't reason!" -- we're advancing chain of thought
The Turing test is meant to see if computers can functionally replicate human thought (just focusing on the textual-conversation aspect, for simplicity).
The implications of passing the Turing test are profound. Tricking a user with a 5-minute conversation isn't the same thing. The examiner is allowed to be an expert questioner.
For one thing, software engineering would quickly be taken over, as a client could just chat with an AI to get the code. That is far from happening, even with all the recent LLM advances.
Similarly for literature, science etc. Currently, you can detect a difference between a competent human & machine in all these fields just by text chat. Another way of saying this is that the Turing Test is AI-complete and is a test for AGI.
> LLMs pass the Turing test more convincingly than anything before
Being able to converse (or convince someone the conversation is natural) is not the same as being able to perform a technical task well. The same applies to humans as well, of course, which perhaps brings us back to the outsourcing comparison: how often does that actually work out well?
> LLMs are vastly more popular than any AI/ML method before them
Popular is not what I'm looking for. I've seen the decisions the general public makes elsewhere; I'm not letting them choose how I do my job :)
> "But they don't reason!" -- we're advancing chain of thought
Call me back when we've advanced chain of thought much more than is currently apparent, and again: for being correct in technical matters, not conversation.
The Turing test was never a deep thesis. It was an offhand illustration of the challenge facing AI efforts, citing an example of something that was clearly impossible with technology of the day.
It's ironic to see people say this type of thing and not think about old software engineering practices that are now obsolete because, over time, we have created more and more tools to simplify the craft. This is yet another step in that evolution. We are no longer using punch cards or writing assembly code, and we might not write actual code in the future anymore and just instruct AIs to achieve goals. This is progress.
> We are no longer using punch cards or writing assembly code
I have done some romhacks, so I have seen what compilers have done to assembly quality and readability. When I hear programmers complain that having to debug AI written code is harder than just writing it yourself, that's probably exactly how assembly coders felt when they saw what compilers produce.
One can regret the loss of elegance and beauty while accepting the economic inevitability.
Not just elegance and beauty, but also functionality. AI is as much a victim as humans are of the rule that if you put your maximum wit into writing the code, you won't have any headroom left for debugging it.
The handful of people writing your compilers, JIT-ers, etc. are still writing assembly code. There are probably more of them today than at any time in the past and they are who enable both us and LLMs to write high level code. That a larger profession sprang up founded on them simplifying coding enough for the average coder to be productive didn't eliminate them.
The value of most of us as coders will drop to zero but their skills will remain valuable for the foreseeable future. LLMs can't parrot out what's not in their training set.
Well, the only issue I have with that is that coding is already a fairly easy way to encode logic. Sure, writing Rust or C isn't easy, but writing some memory-managed code is so easy that I wonder whether we are helping ourselves by removing that much thinking from our lives. It's not quite the same optimization as building a machine so that we don't have to carry heavy stones ourselves. Now we are building a machine so we don't have to do heavy thinking ourselves. This isn't even specific to coding; lawyers essentially also encode logic into text form. What if lawyers in the future increasingly just don't bother understanding laws and just let an AI form the arguments?
I think there is a difference here for the future of humanity, something that has never happened before in our tool-making history.
> Now we are building a machine so we don't have to do heavy thinking ourselves.
There are a lot of innovations that helped us not do heavy thinking ourselves. Think calculators. We will just move to a higher order of magnitude of problem to solve; software development is a means to an end, and instead of thinking hard about coding we should be thinking hard about the problem being solved. That will be the future of the craft.
Calculators are a good example of where letting too much knowledge slip can be an issue. So many are made by people with no grasp of order of operations or choosing data types. They could look it up, but they don't know they need to.
It's one of those problems that seems easy, but isn't. The issue seems to come out when we let an aid for process replace gaining the knowledge behind the process. You at least need to know what you don't know so you can develop an intuition for when outputs don't (or might not) make sense.
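To make the calculator point concrete, here is a toy sketch (in Rust, purely illustrative, with made-up numbers) of the two failure modes mentioned above: ignoring operator precedence and ignoring data types.

```rust
fn main() {
    // A precedence-aware calculator evaluates 2 + 3 * 4 as 2 + (3 * 4) = 14.
    let with_precedence = 2 + 3 * 4;

    // A naive left-to-right calculator effectively computes (2 + 3) * 4 = 20.
    let left_to_right = (2 + 3) * 4;

    assert_eq!(with_precedence, 14);
    assert_eq!(left_to_right, 20);

    // The data-type side of the same problem: integer division silently
    // truncates, while floating-point division does not.
    let int_div = 7 / 2; // 3
    let float_div = 7.0 / 2.0; // 3.5

    println!("{with_precedence} {left_to_right} {int_div} {float_div}");
}
```

If the person building the tool doesn't know these distinctions exist, they won't know to look them up, which is exactly the "you need to know what you don't know" problem.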
this is not progress. this is regression. who is going to maintain and further develop the software if not actual programmers? in the end the LLMs stop getting new information to be trained on, and they can't truly innovate (since they're not AGI)
Things like memory-safe languages and JS DOM-managed frameworks are limited-scope, solved problems for most business computing needs, outside of some very marginal edge cases.
AI generated code? That seems a way off from being a generalized solved problem in an iterative SDLC at a modern tech company trying to get leaner, disrupt markets, and survive in a complex world. I for one am very much in support of it for engineers with the unaided experience under their belt to judge the output, but the idea that we're potentially going to train new devs at unfamiliar companies on this stuff? Yikes.
Elsewhere in this discussion thread[0], 'ChrisMarshallNY compares this to feelings of insecurity:
> It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it is a big part of growing up (...)
I believe he's right.
It makes me think back to my teenage years, when I first learned to program because I wanted to make games. Within the amateur gamedev community, we had this habit of sneering at "clickers" - Klik&Play & other kinds of software we'd today call "low-code", that let you make games with very little code (almost entirely game logic, and most of it "clicked out" in GUI), and near-zero effort on the incidental aspects like graphics, audio, asset management, etc. We were all making (or pretending to make) games within scope of those "clickers", but using such tools felt like cheating compared to doing it The Right Way, slinging C++ through blood, sweat and tears.
It took me over a decade to finally realize how stupid that perspective was. Sure, I've learned a lot; a good chunk of my career skills date back to those years. However, whatever technical arguments we levied against "clickers", most of them were bullshit. In reality, this was us trying to feel better, special, doing things The Hard Way, instead of "taking shortcuts" like those lazy people... who, unlike us, actually released some playable games.
I hear echoes of this mindset in a lot of "LLMs will rot your brain" commentary these days.
Insecurity is not just a part of growing up, it's a part of growing old as well, a feeling that our skills and knowledge will become increasingly useless as our technologies advance.
Humans are tool users. It is very difficult to pick a point in time and say, "it was here that we crossed the Rubicon". Was it the advent of spoken word? Written word? Fire? The wheel? The horse? The axe? Or in more modern times, the automobile, calculator, or dare I say the computer and the internet?
"With the arrival of electric technology, man has extended, or set outside himself, a live model of the central nervous system itself. To the degree that this is so, it is a development that suggests a desperate suicidal autoamputation, as if the central nervous system could no longer depend on the physical organs to be protective buffers against the slings and arrows of outrageous mechanism."
― Marshall McLuhan, Understanding Media: The Extensions of Man
Are we at battle with LLMs or with humanity itself?
Progress is more than just simplistic effort reduction. The attitude of more efficient technology = always good is why society is quickly descending into a high-tech dystopia.
It's business, not technology, that makes us descend into a high-tech dystopia. Technology doesn't bring itself into existence or into market - at every point, there are people with malicious intent on their mind - people who decide to commit their resources to commission development of new technology, or to retask existing technology, specifically to serve their malicious goals.
Note: I'm not saying this is a conspiracy or a cabal - it's a local phenomenon happening everywhere. Lots of people with malicious intent, and even more indifferent to the fate of others, get to make decisions hurting others, and hide behind LLCs and behind tech and technologists, all of which get blamed instead.
Progress does not come from the absence of effort. That is just a transparent, self-serving, "greed is good" class of argument from the lazy. It merely employs enough cherry-picked truth to sound valid.
Progress comes from amplification of effort, which comes from leverage (output comes from input), not magic (output comes from nowhere) or theft (output comes from someone else).
> Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
You've arrived at the reason for compilers. You could go produce all that pesky machine code yourself or you could learn to use the compiler to its optimal potential to do the work for you.
LLMs and smart machines alike will be the same concept but potentially capable of a wider variety of tasks. Engineers that know how to wield them and check their work will see their productivity increase as the technology gets better. Engineers that don't know how to check their work or wield them will at worst get less done or produce volumes of garbage work.
Agreed. Most of us aren’t washing our clothes with a washboard, yet the washboard not long ago was a timesaver. Technology evolves.
Now, if AI rises up against the government and Cursor becomes outlawed, then maybe your leet coder skills will matter again.
But when a catastrophic solar storm takes out the grid, a washboard may be more useful, to give you something to occupy yourself with while the hard spaghetti of your dwindling food supply slowly absorbs tepid rainwater, and you wish that you’d actually moved to Siberia and learned to live off the land as you once considered while handling a frustrating bug in production that you could’ve solved with Claude.
To clarify, what I meant with that line was how to use AI in ways that strengthen your knowledge and skills rather than weaken them. This seems to be a function of effort (see the Testing Effect), there doesn't seem to be any way around that?
Whereas what you're responding to is using AI to do the work for you, which weakens your own faculties the more you do it, right?
> [..] but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
No. In learning, there is no substitute for practice. Once you jump to the conclusion, you stop understanding the "why".
Throughout various stages of technological advancement, we've come up with tools to help relieve us of tedious effort, because we understood the "why", the underlying reason for what the tools were helping us solve in the first place. Those who understood the "why" could build upon those tools to further advance civilization. The others were left behind to parrot about something they did not understand.
But - and ironically so in this case - as with most things in life, it is about the journey and not the destination. And in the grand scheme of things, it doesn't really matter if we take a more efficient road, or a less efficient road, to reach a life lesson.
Just as players can optimise all the fun out of a game, companies can optimise all competency out of workers.
While both are a consequence of a natural or logical tendency, neither is good. Once a player optimises all the fun out of a game, they stop playing it, often without experiencing it in its entirety (which I would regard as a negative outcome for both player and game).
I am not able to extrapolate with confidence what the analogous outcome would be in the company/worker side of the equation. I would confidently say, out of just anecdotal experience, it's not trending in a good direction.
Indeed, taking the work of others, stripping the license and putting it on a photocopier is the most efficient way of "work". This is progress, not a problem.
This is unnecessary hyperbole. It's like saying that your direct reports do all the work for you. You need to put in effort to understand the strengths and weaknesses of AI, put it to good work, and make sure to double-check its results. Low-skill individuals are not going to get great results for moderately complex tasks with AI. It's absurd to think it will do "all the work". I believe we are at the point of SW engineering skills shifting from understanding all the details of programming languages and tooling to higher-level thinking and design.
Although I do see that, without proper processes (code reviews, guidelines, etc.), use of AI can get out of hand to the point of a very bloated and unmaintainable code base. Well, as with any powerful technology, it has to be handled with care.
I never saw anyone on HN bemoan the “environmental damage” of data centers until LLMs started popping up. As if all other software and internet tech is necessary and “worth it”?
Those other levels of abstraction that you mentioned are deterministic and predictable. I think that's a massive difference and it justifies why one should be more skeptical towards AI generated code (skeptical doesn't mean "outright dismissing" tbf).
Fun fact: we burned those bridges years ago. Every time a big codebase updates its main optimizing toolchain (compiler/linker)... there is nothing deterministic and predictable. There is only what has been re-tested and what has not been.
Deterministic tab completion generates horseshit half the time too; at the end of the day you have to know what you're doing in either case, imo. You can also run an LLM in deterministic mode, but performance is usually worse.
I think this boils the problem down solely to the individual. What if companies recognised this issue and set aside a period of time during the day devoted solely to learning, knowledge management, and innovation? If their developers only use AI to be more productive, then the potential degradation of intellect could stifle innovation and make the company less competitive in the market. It would be interesting if we start seeing a resurgence of Google's mythical 20% rule, with companies more inclined to let any employee create side projects using AI (like Cursor) that could benefit the company.
The problem is motivation. I've worked at companies with policies like this, but you often only see a handful of genuinely motivated folks really make use of it.
Your cognitive abilities for programming may decline, but that's not the aim here, is it? The aim is to solve a problem for a user, not to write code. If we solve that using different tools, maybe our cognitive abilities will just focus on something else?
Exactly. If there is a tool that can do a lot of the low level work for us, we are free to do more of the higher level tasks and have higher output overall. Programming should be just a means to an end, not the end itself.
Or, more likely, most of us will be much lower paid or simply out of a job as our former customers input their desires into the AI themselves and it spits out what they want.
Are you going to take my washing machine next? AI is a gateway to more free time, to do whatever you want with. It's your decision whether to let your brain rot away or not.
The purpose of the least-resistance instinct is to conserve the organism's resources in the face of food scarcity. Consequently, in the absence of food scarcity, this instinct is suboptimal.
In general, time management isn't trivial. There's risk management, security, technical debt, incremental degradation, debuggability, and who knows what else; we only recently began to understand these things.
True. Still, the idea that short-term gains = long-term losses does not generalize, because it depends on the shape of the reward landscape. In the common, simple case, the short-term wins are steps leading straight to long-term wins. I'd go as far as to say, it's so common that we don't notice or talk about it much.
We also never gain any time from productivity. When sewing machines were invented and the time to make a garment went down 100x, you didn’t work 15 minutes a day.
Instead, those people are actually now much poorer and work much more. What was a respectable job is grunt work, and they're sweatshop warm bodies.
The gains of their productivity were, predictably, siphoned upwards.
For me the question is different - you can ask "how to become a better coder", for which the answer might be to write more code. But the question I'm asking is: what's the job of a system/software developer? And to that, the answer is very different. At least for me, it is "building solutions to business problems, in the most efficient way". So in that case, more/better code does not necessarily bring you closer to your goal.
> For me the question is different - you can ask "how to become a better coder", for which the answer might be to write more code.
The problem with this answer is that there is little to learn from most of the code written in a corporate environment. You typically only learn from brutally hard, bleeding-edge challenges that you set for yourself, which you will barely ever find at work.
Good point. I think there is a fundamental difference in the way people see software engineering, and it doesn't have anything to do with generative AI; the disagreement existed way before then:
* for some it is a form of art or craftsmanship; they take pride in their work and view it not just as a means to an end; they hone their skills just like any craftsman will; they believe that the quality of the work and the goal or purpose it serves are intrinsically linked and can never be separated
* for others it's a means to an end; the process or object it produces is irrelevant EXCEPT for the purpose it serves; only the end goal matters; things like software quality are irrelevant distractions that must be optimized away since they serve no inherent purpose
There is no way to reconcile this, these are just radically different perspectives.
If you only care about the goal then of course raising questions about the quality of the output of current state of generative AI will be of no concern to you.
> against the basic human (animal?) instinct to take the path of least resistance.
Fitness. Nobody takes a run because they need to waste half an hour. Okay, some people just have energy and time to burn, and some like to code for the sake of it (I used to). We need to do things we don't like from time to time in order to stay fresh. That's why we have drills and exercises.
> the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
I think AI is being driven most by those interested in the "promise" of eliminating workers altogether.
But, if by "AI epidemic", you more mean the increasing uptake of AI by workers/devs themselves (versus the overall frenzy), then I think the characterizations of "avoiding effort" and "laziness" are a bit harsh. I mean, of course people don't generally want to take the path of most resistance. And, there's value in efficiency being an "approach to life".
But, I think there's still more from the human side: that is, if the things you're spending your time on have less value of the sort emphasized in your environment, then your interest will likely naturally wane over time. I think this is especially true of something tedious like coding.
So, I believe there's a human/social element that goes beyond the pressure to keep up for the sake of job security.
What I meant specifically was overreliance on AI, in the sense that I'm now hearing that many junior devs can't do basic coding without AI.
(The parallel that comes to my mind is that programmers raised without garbage collection learned to write more efficient code, and now a text editor uses several gigabytes. But yeah, someone called me a "dinosaur" already in this thread ;)
Though this "increase in incompetence" might be a case of selection effect rather than degradation of ability, i.e. people who wouldn't even have gotten the job before are now getting into the industry. I'm not sure, it's probably a bit of both.
> whole reason for the "AI epidemic" is that people are avoiding effort like the plague
sounds like you're against corporate hierarchy too? or would you agree that having underlings do stuff for you at a reasonable price helps you achieve more?
Well, we're about to automate/replace the entire laborer class, or at least we're putting trillions of dollars into making that happen as soon as possible.
I'm not sure what effect that will have on the hierarchy, since it seems like they will replace the top half of the company as well.
The biggest problem I have with using AI for software engineering is that it is absolutely amazing for generating the skeleton of your code, boilerplate really, and it sucks for anything creative. I have tried the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.
For example: what would be the best strategy to download thousands of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.
There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.
Maybe I am prompting incorrectly and somehow I could get the right answers from these models, but at this stage I use them as a boilerplate generator, and the actual creative problem solving remains on the human side.
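For reference, the shape of solution this kind of task usually converges on is a bounded-concurrency stream. Below is a minimal sketch assuming reqwest, tokio, and futures as dependencies; the example URL list and the limit of 50 concurrent downloads are made up for illustration, not necessarily what that forum answer used.

```rust
use futures::{stream, StreamExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical URL list; substitute your real URLs here.
    let urls: Vec<String> = (0..1000)
        .map(|i| format!("https://example.com/item/{i}"))
        .collect();

    let client = reqwest::Client::new();

    // Turn the URL list into a stream of download futures and run
    // at most 50 of them concurrently.
    let results: Vec<_> = stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                let resp = client.get(url.as_str()).send().await?;
                let body = resp.bytes().await?;
                Ok::<_, reqwest::Error>((url, body.len()))
            }
        })
        .buffer_unordered(50)
        .collect()
        .await;

    for r in results {
        match r {
            Ok((url, len)) => println!("{url}: {len} bytes"),
            Err(e) => eprintln!("download failed: {e}"),
        }
    }
    Ok(())
}
```

The `buffer_unordered(50)` call is what keeps the thousands of URLs from all being requested at once while still overlapping the downloads.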
Personally I've found that you need to define the strategy yourself, or in a separate prompt, and then use a chain-of-thought approach to get to a good solution. Using the example you gave:
Hey Chat,
Write me some basic rust code to download a url. I'd like to pass the url as a string argument to the file
Then test it and expand:
Hey Chat,
I'd like to pass a list of urls to this script and fetch them one by one. Can you update the code to accept a list of urls from a file?
Test and expand, and offer some words of encouragement:
Great work chat, you're really in the zone today!
The downloads are taking a bit too long, can you change the code so the downloads are asynchronous. Use the native/library/some-other-pattern for the async parts.
Whew, that's a lot to type out, and you have to provide words of encouragement? Wouldn't it make more sense to do a simple search engine query for an HTTP library, then write some code yourself and provide that as context when doing more complicated things like async?
I really fail to see the usefulness in typing out long winded prompts then waiting for information to stream in. And repeat...
I'm going the exact opposite way. I provide all important details in the prompt and when I see that the LLM understood something wrong, I start over and add the needed information to the prompt. So the LLM either gets it on the first prompt, or I write the code myself. When I get the "Yes, you are right ..." or "now I see..." crap, I throw everything away, because I know that the LLM will only find shit "solutions".
I have heard a few times that "being nice" to LLMs sometimes improves their output quality. I find this hard to believe, but happy to hear your experience.
Examples include things like referring to the LLM nicely ("my dear"), saying "please" and asking nicely, or thanking it.
Well, consider its training data. I could easily see questions on sites like Stack Overflow having better-quality answers when the original question is asked nicely. I'm not sure if it's a real effect or not, but I could see how it could be. A rudely asked question will get a lot of flame-war responses.
I use to do the "hey chat" all the time out of habit and when I thought the language model was something more like AI in a movie than what it is. I am sure it makes no difference beyond the user acting different and possibly asking better questions if they think they are talking to a person. Now for me, it looks completely ridiculous.
I agree completely with all you said; however, Claude recently solved a problem I had in a pretty surprising way.
So I’m not very experienced with Docker and can just about make a Docker Compose file.
I wanted to set up cron as a container in order to run something on a volume shared with another container.
I googled “docker compose cron” and must have found a dozen cron images. I set one up and it worked great on X86 and then failed on ARM because the image didn’t have an ARM build. This is a recurring theme with Docker and ARM but not relevant here I guess.
Anyway, after going through those dozen or so images, all of which don't work on ARM, I gave up and sent the Compose file to Claude and asked it to suggest something.
It suggested simply using the Alpine base image and adding an entry to its crontab, and it works perfectly fine.
This may well be a skill issue, but it had never occurred to me that cron is still available like that.
Three pages of Google results and not a single result anywhere suggesting I should just do it that way.
Of course this is also partly because Google search is mostly shit these days.
Maybe you would have figured it out if you had thought a bit more deeply about what you wanted to achieve.
You want to schedule things. What is the basic tool we use to schedule on Linux? Cron. Do you need to install it separately? No, it usually comes with most Linux images. What is your container, functionally speaking? A working Linux system. So you can run scripts on it. A lot of these scripts run binaries that come with Linux. Is there a cron binary available? Try using that.
Of course, hindsight is 20/20 but breaking objectives down to their basic core can be helpful.
With respect, the core issue here is you lacked a basic understanding of Linux, and this is precisely the problem that many people — including myself – have with LLMs. They are powerful and useful tools, but if you don’t understand the fundamentals of what you’re trying to accomplish, you’re not going to have any idea if you’re going about that task in the correct manner, let alone an optimal one.
As I understand it, 'reasoning' is a very misleading term. As far as I can tell, AI reasoning is a step to evaluate the chosen probabilities. So maybe you will get fewer hallucinations, but it still doesn't make AI smart.
Like, dropping this in the middle of the conversation to force the model out of a "local minimum"? Or restarting the chat with that prompt? I'm curious how you use it to make it more effective.
Interestingly, many here fail to note that development of code is a lot about debugging, not only about writing. It's also about being able to dig through/search/grok the code, which is like... reading it.
It is the debugging part, to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and its integration. THIS is what teaches you; writing alone teaches you nothing... you can copy programs by hand and understand zero of what they do unless you inspect intermediate results.
Hand-crafting a house is super romantic and nice, etc. It's a thing people did for a lifetime, for ages, and usually not alone - with family and friends. But people today live in houses/apartments whose foundations were produced by automated lines (robots) - the steel, the mixture for the concrete, etc. And people still live in the houses built this way, designed with computers which automated the drawing. I fail to understand why this is bad.
I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.
Claude 3.7:
> I understand the desire to simplify, but using a text array for .... might create more problems than it solves. Here's why I recommend keeping the relational approach:
( list of okay reasons )
> However, I strongly agree with adding ..... to the model. Let's implement that change.
I was kind of shocked by the display of opinions. HAL vibes.
My experience is that it very often reacts to a simple question by apologizing and completely flipping its answer 180 degrees. I just ask for an explanation, like "is this a good way to do x, y, z?", and it goes "I apologize, you are right to point out the flaw in my logic. Let's do it the opposite way."
I shudder to think that all these LLMs were trained on internet comments.
Of course, only the sub-intelligent would train so-called "intelligence" on the mostly less-than-intelligent, gut-feeling-without-logic folks' comments.
It's like that ancient cosmology with turtles all the way down, except this is dumbasses, very confident dumbasses who have lots of cash.
It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves. Not even just about coding, about any other knowledge.
> It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves.
I am from the generation whose only options on the table were RTFM and/or read the source code. Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore, but chatbots are.
I welcome chatbots. They greatly simplify research tasks. We are no longer bound to stale/poorly written docs.
I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.
> Your blend of comment was also directed at the likes of Google and StackOverflow.
No, it wasn't.
What such comments were directed at, and with good reason, were 'SO-"Coders"', aka people who, when faced with any problem, just googled a vague description of it, copy-pasted the code from the highest-scoring SO answer into their project, and called it a day.
SO is a valuable resource. AI systems are a valuable resource. I use both every day, just as I almost always have one screen dedicated to some documentation page.
The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of one's own.
But skill will be needed. It's everything that is still necessary between nothing and (good) software existing. It will just rapidly become something that we are not used to, and the rate of change will be challenging, especially for those with specialized, hard-earned skills and knowledge that become irrelevant.
Yes, they were, at least by some people, for reasons that have turned out to be half-right and half-wrong. Ctrl-C Ctrl-V programming was widely derided, but people were also worried about a general inability to RTFM.
I had the good fortune to work with a man who convinced me to go read the spec of some of the programming languages I used. I'm told this was reasonably common in the days of yore, but I've only rarely worked on a team with someone else who does it.
Reading a spec or manual helps me understand the language and aids with navigating the text later when I use it as documentation. That said, if all the other programmers can do their jobs anyway, is it really so awful for them to learn from StackOverflow and Google? Probably not.
I also try to read the specification or manual for tools I use, and find the biggest benefit is simply knowing what that tool is capable of, and what the recommended approach to a given problem is when using that tool. Even just skimming the modules available in a language's standard library can get you a long way.
I was once able to significantly reduce the effort for some feature just by reading the elasticsearch docs and finding that there was a feature (aliases) that did more or less exactly what we needed (it still needed some custom code, but much less than initially thought). Nobody else had bothered to look at the docs in detail.
I agree. Claude is very useful to me because I know what I am doing and I know what I want it to do. Additionally, I keep telling my friend who is studying data science to use LLMs to his advantage. He could learn a lot and be productive.
Chatbots like Copilot, Cursor, Mistral, etc serve the same purpose that StackOverflow does. They do a far better job at it, too.
> The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of one's own.
Nonsense. The same blend of criticism was at one point directed at IDEs and autocompletion. The common thread is ladder-pullers complaining how the new generation doesn't use the ladders they've used.
at risk of sounding like a grandpa, this is nothing like SO. AI is just a tool for sure, one that "can" behave like a super-enhanced SO and Google, but for the first time ever it can actually write for you, and not just piddly lines but entire codebases.
i think that represents a huge paradigm shift that we need to contend with. it isn't just "better" research. and i say this as someone who welcomes all of this that has come.
IMO the skill gap just widens exponentially now. you will either have competent developers who use these tools to accelerate their learning and/or output by some X factor, and on the other hand you will have literal garbage being created, or people who figure out they can now expend 1/10 the effort and time to do something and just coast, never bothering to even understand what they wrote.
just encountered that in some interviews, where people can now scaffold something up in record time but can't be bothered to refine it because they don't know how. (ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying to re-prompt and micromanage the LLM to get the right output, when it should only take another minute for someone experienced.)
Strong agree. There have been people who blindly copied answers from Stack Overflow without understanding the code, but most of us took the time to read the explanations that accompanied the answers.
While you can ask the AI to give you additional explanations, these explanations might be hallucinations and no one will tell you. On SO other people can point out that an answer or a comment is wrong.
> ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying to re-prompt and micromanage the LLM to get the right output, when it should only take another minute for someone experienced.
This is it. You will have real developers, as you do today, and developers who are only capable of creating what the latest AI model is capable of creating. They’re just a meat interface for the AI.
> Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore
And it kind of was a problem. There was an underclass of people who simply could not get anything done if it wasn't already done for them in a Stack Overflow answer or a blog post or (more recently and bafflingly) a YouTube video. These people never learned to read a manual and spent their days flailing around, wondering why programming seemed so hard for them.
Now with AI there will be more of these people, and they will make it farther in their careers before their lack of ability will be noticeable by hiring managers.
I would even say there's a type of comment along the lines of "we've been adding Roentgens for decades, surely adding some more will be fine so stop complaining".
As a second-order effect, I think there's a decline in expected docs quality (it depends on the area, of course). Libraries and such don't expect people to read through their docs, so they are spotty and haphazard, with only some random things mentioned. No wider overviews and explanations, and somewhat rightly so: why write them if (nearly) no one will read them? So only tutorials and Q&A sites remain, besides API dumps.
.. for the brief period of time before machines take care of the whole production cycle.
Which is a great opportunity btw to drive forward a transition to a post-monetary, non-commercial post-scarcity open-source open-access commons economy.
The issues I see are that private chats are information black holes, whereas public stuff like SO can show up in a search and help more than just the original asker (it can also be easily referenced/shared). Also the fact that these chatbots are wrong / make stuff up a lot.
That's not really a problem with LLMs, which are trained on material that is, or was, otherwise accessible on the Internet, and they themselves remain accessible.
No, the problem is created and perpetuated by people defaulting to building communities around Discord servers or closed WhatsApp groups. Those are true information black holes.
And yes, privacy and commons are in opposition. There is a balance to be had. IMHO, in the past few years, we've overcorrected way too much in the privacy direction.
Unfortunately, though, it isn't actually privacy in most cases. It's just a walled garden where visibility is limited to users and some big-tech goons who will eventually sell it all.
I think it's unwise to naively copy code from either stack overflow or an AI, but if I had to choose, I'd pick the one that had been peer-reviewed by other humans every time.
We old timers read the source code, which is a good proxy for what runs on the computer. From that, we construct a mental model of how it works. It is not "walking uphill 10 miles both ways". It is "understanding what the code actually does vs what it is supposed to do".
So far, AI cannot do that. But it can pretend to, very convincingly.
> We are no longer bound to stale/poorly written docs.
from what i gather, the training data often contains those same poorly written docs and often a lot of poorly written public code examples, so… YMMV with this statement as it is often fruit from the same tree.
(to me LLMs are just a different interface to the same data, with some additional issues thrown in, which is why i don’t care for them).
> I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.
it’s a question of trust for me. with great power (new tools) comes great responsibility — and juniors ain’t always learned enough about being responsible yet.
i had a guy i was doing arma game dev with recently. he would use chatgpt and i would always warn him about not blindly trusting the output. he knew it, but i would remind him anyway. several of his early PRs had obvious issues that were just chatgpt not understanding the code at all. i’d point them out at review, he’d fix them and beat himself up for it (and i’d explain to him it’s fine don’t beat yourself up, remember next time blah blah).
he was honest about it. he and i were both aware he was very new to coding. he wanted to learn. he wanted to be a “coder”. he learned to mostly use chatgpt as an expensive interface for the arma3 docs site. that kind of person using the tools i have no problem with. he was honest and upfront about it, but also wanted to learn the craft.
conversely, i had a guy in a coffee shop recently claim to want to learn how to be a dev. but after an hour of talking with him it became increasingly clear he wanted me to write everything for him.
that kind of short sighted/short term gain dishonesty seems to be the new-age copy/pasting answers from SO. i do not trust coffee shop guy. i would not trust any PR from him until he demonstrates that he can be trusted (if we were working together, which we won’t be).
so, i get your point about doom and gloom naysaying. but there's a reason for the naysaying, from my perspective. and it comes down to whether i can trust individuals to be honest about their work and how they got there and be willing to learn, or whether they just want to skip to the end.
essentially, it’s the same copy/pasting directly from SO problem that came before (and we’re all guilty of).
Oh, heck. We didn’t need AI to do that. That’s been happening forever.
It’s not just bad optics; it’s destructive. It discourages folks from learning.
AI is just another tool. There’s folks that sneer at you if you use an IDE, a GUI, a WYSIWYG editor, or a symbolic debugger.
They aren’t always boomers, either. As a high school dropout, with a GED, I’ve been looking up noses, my entire life. Often, from folks much younger than me.
It's really about basic human personal insecurity, and we all have that, to some degree. Getting around it is a big part of growing up, so a lot of older folks are actually a lot less likely to pull that crap than you might think.
> Apparently SO is not a problem anymore, but chatbots are.
I think the same tendency of some programmers to just script-kiddie their way out of problems using SO answers, without understanding the issue, will be exacerbated by the proliferation of AI, which is much more convincing about wrong answers.
It's not a binary. You don't have to either hate chatbots or welcome them, with no in-between. We all use them, but we all also worry about the negatives; same with SO.
And they were right. People who just blindly copy and paste code from SO are absolutely useless when it comes to handling real world problems, bugs etc.
Apparently from what I've read, universities are already starting to see this. More and more students are incapable of acquiring knowledge from books. Once they reach a point where the information they need cannot be found in ChatGPT or YouTube videos they're stuck.
I wonder if Google Gemini is trained on all the millions of books that Google scanned but was never able to use for their original purpose?
If, as other AI companies argue, copyright doesn't apply to training, it should give Google a huge advantage to be able to use all the world's books they scanned.
It's about putting together individual pieces of information to come up with an idea about something. You could get 5 books from the library and spend an afternoon skimming them, putting sticky notes on things that look interesting/relevant, or you could hope that some guy on Youtube has already done that for you and has a 7 minute video summarizing whatever it is you were supposed to be looking up.
I haven't been able to acquire knowledge from books for the past 35 years. I had to get by with self-directed learning. The result is a patchy understanding, but a lot of faith in myself.
"if they want" and "lazy" are the load-bearing phrases in your response. If we take them at face value, then it follows that:
1) There are no idiots who want to do better than AI, and/or
2) All idiots are lazy idiots.
The reason we're even discussing LLMs so much in the first place is that AI can and does do things better than "idiots"; hell, it does things better than most people, period. Not everything in every context, but a lot of things in a lot of contexts.
Like run-of-the-mill short-form writing, for example. Letters, notices, copywriting, etc. And yes, it even codes better than general population.
If it's been indexed by a Web search engine surely it's in a training dataset. The most popular Web search engines are the least efficient way of finding answers these days.
But just because it's in the training set doesn't mean the model retains the knowledge. It acquires information that is frequently mentioned and random tidbits of everything else. The rest can be compensated for with the 20-odd web searches newer models get. That's great when you want a React dropdown, but for that detail that's mentioned in one Russian-speaking forum and can otherwise only be deduced by analysing the leaked Windows XP source code, AI will continue to struggle for a bit.
Of course, AI is incredibly useful both for reading foreign-language forums and for analysing complex code bases for original research. AI is great for supercharging traditional research.
Learning isn't just about rote memorisation of information but the continuous process of building your inquisitive skills.
An LLM isn't a mind reader so if you never learn how to seek the answers you're looking for, never curious enough to dig deeper, how would you ever break through the first wall you hit?
In that way the LLM is no different than searching Google back when it was good, or even going to a library.
I recently interviewed a candidate with a degree in computer science. He was unable to explain to me how he would have implemented the Fibonacci sequence without chatGPT.
We never got to the question of recursive or iterative methods.
The most worrying thing is that LLMs were not very useful three years ago when he started university. So the situation is not going to improve.
Yep, I still can not understand how programmers unable to do fizzbuzz still have a sw engineer career. I have never worked with one like that, but I have seen so many of them on interviews.
I have seen IT consulting and service companies employ ‘developers’ who are barely capable of using software to generate views in a banking software package and unable to write a line of Java (which was the language used on the project).
When the client knows absolutely nothing about it and is not supported by someone competent, they end up employing just anyone.
This applies to construction, IT, and probably everything else.
Unless you're also a programmer, it's very difficult to judge someone else's programming ability. I've worked places where I was the only remotely-technical person there, and it would have been easy to fake it with enough confidence and few enough morals.
But it is. Some knowledge and questions are simply increasingly outdated. If the (correct) answer on how to implement Fibonacci is one LLM query away, then why bother knowing? Why should a modern day web developer be able to write assembly code, when that is simply abstracted away?
I think it will be a hot minute before nothing has to be known and all human knowledge is irrelevant, but, specially in CS, there is going to be a tremendous amount of rethinking to do, of what is actually important to know.
Not everyone does web dev. There are many jobs where it is necessary to have a vague idea of memory architecture.
LLMs are very poor in areas such as real time and industrial automation, as there is very little data available for training.
Even if the LLM were good, we will always need someone to carry out tests, formal validation, etc.
Nobody wants to get on a plane or in a car whose critical firmware has been written by an LLM and proofread by someone incapable of writing code (don't give Boeing any ideas).
The question about Fibonacci is just a way of gently bringing up other topics.
My answer and example don't really hinge on the specifics.
I see nothing mentioned here as something that a human inherently needs to concern themselves with, because none of these things are things that humans inherently care about. CS as a discipline is a layer between what humans want and how to make computers do those things. If today's devs are so far detached from dealing with 1s and 0s (which is not at all how it obviously had to develop), why would any of the other parts you mention be forever necessary, given enough artificial intelligence?
Sure, it's fun (as a discipline) for some of us, but humans inherently do not care about computer memory or testing. A good enough AI will abstract it away, to the degree that it is possible. And, I believe, it will also do a better job than any human ever did, because we are actually really, really bad at these things.
Not outdated. If you know the answer on how to implement Fibonacci, you are doing it wrong. Inferring the answer from being told (or remembering) what is a Fibonacci number should be faster than asking an LLM or remembering it.
> Inferring the answer from being told (or remembering) what is a Fibonacci number should be faster than asking an LLM or remembering it.
Really?
If I were to rank the knowledge relevant to this task in terms of importance, or relevance to programming in general, I'd rank "remembering what a Fibonacci number is" at the very bottom.
Sure, it's probably important in some areas of math I'm not that familiar with. But between the fields of math and hard sciences I am familiar with, and programming as a profession, by far the biggest (if not the only) importance of Fibonacci sequence is in its recursive definition, particularly as the default introductory example of recursive computation. That's all - unless you believe in the mystic magic of the Golden Ratio nonsense, but that's another discussion entirely.
Myself, I remember what the definition is, because I involuntarily memorize trivia like this, and obviously because of Fibonacci's salad joke. But I wouldn't begrudge anyone in tech for not having that definition on speed-dial for immediate recall.
I assume in an interview you can just ask for the definition. I don't think the interview is (should be) testing for your knowledge of the Fibonacci numbers, but rather your ability to recognize precisely the recursive definition and exploit it.
Already knowing the Fibonacci sequence isn't relevant:
> He was unable to explain to me how he would have implemented the Fibonacci sequence without chatGPT.
An appropriate answer could have been "First, I look up what the Fibonacci sequence is on Wikipedia..." The interviewee failed to come up with anything other than the chatbot, e.g. failed to even ask the interviewer for the definition of the sequence, or to come up with an explanation for how they could look it up themselves.
Why should my data science team know what 1+1 is, when they can use a calculator? It's unfair to disqualify a data scientist just for not knowing what 1+1 is, right?
I’m 10 years into software and never bothered to remember how to implement it the efficient way, and I know many programmers who don’t know even the inefficient way but kick ass.
I once got that question in an interview for a small startup and told the interviewer: with all due respect what does that have to do with the job I’m going to do and we moved on to the next question (still passed).
You don’t need to memorize how to compute a Fibonacci number. If you are a barely competent programmer, you should be capable of figuring it out once someone tells you the definition.
If someone tells you not to do it recursively, you should be able to figure that out too.
Interview nerves might get in your way, but it’s not a trick question you need to memorize.
But I'm sure there are some people who, given the following question, would not be able to produce any code by themselves:
"Let's implement a function to return the Nth Fibonacci number. To get a fib (Fibonacci) number you add the two previous numbers, so fib(N) = fib(N-1) + fib(N-2). The starting points are fib(0) = 1 and fib(1) = 1. Let's assume N is never too big (no bigger than 20)."
And that's a problem if they can't solve it.
OTOH about 15 years ago I heard from a friend that interviewed candidates that some people couldn't even count all the instances of 'a' in a string. So in fact not much has changed, except that it's harder to spot these kind of people.
There's nothing wrong with that. But once the interviewer tells you that the next number is the sum of the previous two, starting with 0 and 1, any programmer with a pulse should be good to go.
If some interviewer asked me what recursion was or how to implement it, I'd answer, and then ask them if they can think of a good use case for a duff's device.
Duff's device hasn't been relevant for 25+ years and there is no reason why anybody who learnt to program within the past 20 years should even know what it is, while recursion is still often the right answer.
That's a great question because there are 3 levels of answer: 1) I don't know what recursion is 2) This is what recursion is 3) This is what iteration is
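For reference, a minimal Python sketch of answers 2) and 3), assuming the fib(0) = fib(1) = 1 convention used earlier in the thread (other definitions start the sequence at 0):

```python
def fib_recursive(n: int) -> int:
    # Answer 2): a direct translation of the recursive definition.
    # Fine for small n, but exponential in n without memoisation.
    if n < 2:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)


def fib_iterative(n: int) -> int:
    # Answer 3): linear time, keeping only the last two values.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a


assert fib_recursive(10) == fib_iterative(10) == 89
```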
I suppose it depends on the position, but if your company is building CRUD apps with Postgres and you're asking candidates about Fibonacci, you're wasting both your time and, more importantly, the candidate's time.
Instead, you're better off focusing on relevant topics like SQL, testing, refactoring, and soft skills.
These "clever" questions are pointless. They either serve to stroke the interviewer’s ego: "Look, I know this CS 101 problem because I just looked it up!", or to create a false image of brilliance: "Everyone here can invert binary trees!"
You could say the same about Google, where there's an entire generation who no longer needed to trawl through encyclopedias to find relevant information. I think we're doing just fine.
Actually I did. I saw some kids just googling an exercise question instead of googling the topic - trying to find the answer to the question while avoiding understanding the underlying topic.
Is GenAI after gen alpha? I think it depends on whether agents become a thing. Assuming agents become a thing before the end of this decade, we could see a divide between people born before we had ai agents and after.
I think we already saw this manifestation a few decades ago, with kids who can't program without an IDE.
IDE's are fantastic tools - don't get me wrong - but if you can't navigate a file system, work to understand the harness involved in your build system, or discern the nature of your artefacts and how they are loaded and interact in your target system, you're not doing yourself any favours by having the IDE do all that work for you.
And now what we see is people who not only can't program without an IDE, but can't be productive without a plugin in that IDE, doing all the grep'ing and grok'ing for them.
There has been a concerted effort to make computers stupider for stupid users - this has had a chilling effect in the development world, as well. Folks, if you can't navigate a filesystem with confidence and discern the contents therein, you shouldn't be touching the IDE until you can.
I have (older, but that generation) colleagues who simply stop working if there’s something wrong with the build process because they don’t understand it and can’t be bothered to learn. To be fair to them, the systems in question are massively over complicated and the project definitions themselves are the result of copy-paste engineering.
Unfortunately they also project their ignorance, so there’s massive pushback from more senior employees when anyone who does understand tries to untangle the mess and make it more robust.
The same thing will happen with these ML tools in the future, mark my words: writing code will come to be seen as “too complex and error prone” and barely working, massively inefficient and fragile generated code bases will be revered and protected with “don’t fix what isn’t broken”
I worked early in my career with a developer who printed out every part of the codebase to review and learn it. He viewed text search tools, file delineations, and folder structures as crutches best avoided.
Yes indeed, that is a perfectly reasonable approach to take, especially in the world of computing where rigorous attention to detail is often rewarded with extraordinary results.
I have very early (and somewhat fond) memories of reviewing every single index card, every single hole in the punch-tape, every single mnemonic, to ensure there were no undiscovered side-effects.
However, there is a point where the directory structure is your friend, you have to trust your colleagues ability to understand the intent behind the structure, and you can leverage the structure to gain great results.
Always remember: software is a social construct, and is of no value until it is in the hands of someone else - who may, or may not, respect the value of understanding it...
It is truly astonishing how bad things have become, so quickly. Just the other day, I found myself debating someone about a completely fabricated AWS API call.
His justification was that some AI had endorsed it as the correct method. There are already salaried professionals out there defending such flawed logic.
Kinda like the kids who only learned to drive with Waze.
When they are stuck without it, they get hopelessly lost. They feel strangled, distracted, and find it hard to focus on the road. For about 3 days. Then they pretty quickly get up to speed, and kinda remember where they used to make the left turns when they had GPS, and everything is dandy.
But it happens so infrequently, that it's really not worth the time thinking about it.
And the truth is, I am just guessing what happens after 3 days - since anyone who grew up with GPS will never be that long without it.
Otherwise I recall an old saying: 'Robots will take the jobs of engineers as soon as they are able to figure out and make what the client needs, not what the client asks for. I think we are safe.' Along those lines, I hope AI becomes a good tool, a subordinate, or at most a colleague. But not a father figure for big infants.
Please elaborate on what you think the issues will be? Why read documentation when you can simply ask questions and get better contextualised answers faster? Whatever edge issues manifest they will be eclipsed by greater productivity.
Because learning improves your internal LLM which allows you to creatively solve tasks without using external ones. Additionally it is possible to fine tune your internal LLM for the tasks useful for your specific interests, the fun stuff. And external llms are too generalised
It was also fun, though. Unfortunately, taking up programming as a career is a quick way to become disillusioned about what matters (or at least what you're being paid for).
The issue is, glib understanding replaces actual application.
Its one thing to have the AI work through the materials and explain it.
Its another thing to have lost a lot of the background information required to sustain that explanation.
Greater productivity, is of course, a subjective measure. If that's all that matters, then being glib is perfectly acceptable. But, the value of that productivity may change in a different context - for example, I find it very difficult to go back to AI-generated code some months later, and understand it - unless I already took the time earlier to discern the details.
I asked specifically the AI i interact with not to generate code or give code examples, but to highlight topics i need to better my understanding in to answer my own questions. I think it enhances my personal competences better that way, which i value above 'productivity'. As i learn more, i do become more efficient and productive.
Some of the recommendations it comes with are hard programming skills, others are project management oriented.
I think this is a better approach personally to use this kind of technology as it guides me to better my hard and soft skills. long term gains over short term gains.
Then again, i am under no pressure or obligation to be productive in my programming. I can happily spend years to come up with a good solution to a problem, rather than having a deadline which forces to cut as many corners as possible.
I do think that this is how it should be in professional settings, but respect a company doesn't always have the resources (time mostly) to allow for it. Its sad but true.
Perhaps someday, AIs will be advanced enough to solve problems properly, and to think of the aspects of a problem that the person asking the question has not. AIs can generate quite nice code, but only as good as the question asked.
If the requester doesn't spend time to learn enough, they can never get an AI to generate good code. It will give what you ask for, warts and all!
I did spend some time trying to get AI to generate code for me. To me, it only highlighted the deficiencies in my own knowledge and ability to properly formulate the solution I needed. If i take the time to learn what is needed to formulate the solution fully, i can write the code to implement it myself, so the AI just becomes an augment to my typing speed, nothing else. This last part is why i believe it's better to have it guide my growth and learning, rather than produce something in the form of an actual solution (in code or algorithmically).
Some time circa late 1950s, a coder is given a problem and a compiler to solve it. The coder writes their solution in a high level language and asks the compiler to generate the assembly code from it. The compiler: I cannot generate the assembly code for you, that would be completing your work ... /sarcasm
On a more serious note: LLMs now are an early technology, much like the early compilers who many programmers didn't trust to generate optimized assembly code on par with hand-crafted assembly, and they had to check the compiler's output and tweak it if needed. It took a while until the art of compiler optimization was perfected to the point that we don't question what the compiler is doing, even if it generates sub-optimal machine code. The productivity gained from using a HLL vs. assembly was worth it. I can see LLMs progressing towards the same tradeoff in the near future. It will take time, but it will become the norm once enough trust is established in what they produce.
> The productivity gained from using a HLL vs. assembly was worth it.
You can be very productive in a good assembler (for example RollerCoaster Tycoon and RollerCoaster Tycoon 2 were written basically solo by Chris Sawyer in x86 assembler). The reason was rather that over the generations, the knowledge of writing code in assembly decayed because it got used less and less.
it's real, (but a reply on the forum suggests) Cursor has a few modes for chat, and it looks like he wasn't in the "agent" chat pane, but in the interactive, inline chat thingy? The suggestion is that this mode is limited to the size of what it can look at, probably a few lines around the caret?
Thus, speculating, a limit on context or a prompt that says something like "... you will only look at a small portion of the code that the user is concerned about and not look at the whole file and address your response to this..."
Other replies in the forum are basically "go RTFM and do the tutorial"!
The AI tools are good, and they have their uses, but they are currently at best at a keen junior/intern level, making the same sort of mistakes. You need knowledge and experience to help mentor that sort of developer.
Give it another year or two and I hope the student will become the master and start mentoring me :)
My biggest worry about AI is that it will do all the basic stuff, so people will never have a chance to learn and move on to the more complex stuff. I think there's a name for this, but I can't find it right now. In the hands of a tenured expert though, AI should be a big step up.
Similar to the pre-COVID coding bootcamps and crash-courses we’ll likely just end up with an even larger cohort of junior engineers who have a hard time growing their career because of the expectations they were given. This is a shame but still, if such a person has the wherewithal to learn, the resources to do so are more abundant and accessible than they’ve ever been. Even the LLM can be used to explain instead of answer.
LinkedIn is awash with posts about being a ‘product engineer’ and ‘vibe coding’ and building a $10m startup over a weekend with Claude 3.5 and a second trimester foetus as a cofounder, and the likely end result there is simply just another collection of startups whose founding team struggles to execute beyond that initial AI prototyping stage. They’ll mistake their prompting for actual experience not realising just how many assumptions the LLM will make on their behalf.
Won’t be long before we see the AI startup equivalent of a rug-pull there.
Played a game called Beyond a Steel Sky (sequel to the older Beneath a Steel Sky).
In the starting section there was an "engineer" going around fixing stuff. He just pointed his AI tool at the thing, and followed the instructions, while not knowing what he's doing at any point. That's what I see happening
I just did that with Cursor/Claude where I asked it to port a major project.
No code, just prompts. Right now, after a week, it has 4500 compilation errors, with every single file having issues, requiring me to go back and actually understand what it's gone and done. Debatable whether it has saved time or not.
I did the same, porting a huge python script that had grown way too unwieldy to Swift. It undoubtedly saved me time.
Your project sounds a lot bigger than mine, 4500 is a lot of compilation errors. If it’s not already split into modules, could be a good time to do it piece by piece. Port one little library at a time.
I think it goes the same way as compilers so bits -> assembly -> c -> jvm and now you don't need mostly care what happens at lower levels because stuff works. With AI we are now basically the bits -> assembly phase so you need to care a lot about what is happening at a lower level.
To be honest you don't need to know the lower level things. It's just removing the need to remember the occasional boilerplate.
If I need to parse a file I can just chuck it a couple lines, ask it to do it with a recommended library and get the output in 15 minutes total assuming I don't have a library in mind and have to find one I like.
Of course verification is still needed but I'd need to verify it even if I wrote it myself anyway, same for optimization. I'd even argue it's better since it's someone else's code so I'm more judgemental.
The issue comes when you start trying to do complex stuff with LLMs since you then tend to halfass the analysis part and get led down the development path the LLM chose, get a mish mash of the AIs codestyle and yours from the constant fixes and it becomes a mess. You can get things implemented quickly like that, which is cool, but it feels like it inevitably becomes spaghetti code and sometimes you can't even rewrite it easily since it used something that works but you don't entirely understand.
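To make the "occasional boilerplate" point above concrete, here is a rough, hypothetical example (the log format and field names are made up): the kind of task that is quick to delegate and just as quick to verify by eye.

```python
import json
from pathlib import Path


def error_messages(log_path: str) -> list[str]:
    # Hypothetical boilerplate: pull error messages out of a JSON-lines log file.
    messages = []
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("level") == "error":
            messages.append(record.get("message", ""))
    return messages
```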
Do you worry about calculators preventing people from mastering big-number multiplication? That's an interesting question actually. When I was a kid, calculators were not as widespread and I could easily multiply 4-digit numbers in my head. Nowadays I'd be happy to multiply 2-digit numbers without mistakes. But I carry my smartphone with me, so there's just no need to do so...
So while learning basic stuff is definitely necessary just like it's necessary to have understanding how to multiply or divide numbers of any size (kids learn that nowadays, right?), actually mastering those skills may be wasted time?
> actually mastering those skills may be wasted time?
this question is, i’m pretty sure it is safe to assume, the absolute bane of every maths teacher’s existence.
if i don’t learn and master the fundamentals, i cannot learn and master more advanced concepts.
which means no fluid dynamics for me when i get to university because “what’s the point of learning algebra, i’m never gonna use it in real life” (i mean, i flunked fluid dynamics but it was because i was out drinking all the time).
i still remember how to calculate compound interest. do i need to know how to calculate compound interest today? no. did i need to learn and master it as an application of accumulation functions? absolutely.
just because i don’t need to apply something i learned to master before doesn’t mean i didn’t need to learn and master it in order to learn and master something else later.
mastery is a cumulative process in my experience. skipping out on it with early stuff makes it much harder to master later stuff.
> Do you worry about calculators preventing people from mastering big-number multiplication?
Honestly, I think we already see that in wider society, where mental arithmetic has more or less disappeared. This is in general fine ofc, but it makes it much harder to check the output of a machine if you can't do approximate calculations in your head.
> but they are currently at best at a keen junior/intern level
Would strongly disagree here. They are something else entirely.
They have the ability to provide an answer to any question, but their accuracy decreases significantly depending on the popularity of the task.
So if I am writing a CRUD app in Python/React it is expert level. But when I throw some Scala or Rust it is 100x worse than any junior would ever be. Because no normal person would confidently rewrite large amounts of code with nonsense that doesn't even compile.
And I don't see how LLMs get significantly better without a corresponding improvement in input data.
Not only that, still very far away from being good enough. An opinionated AI trying to convince me his way of doing things is the one true way and my way is not the right way, that's the stuff of nightmares.
Give it a few years and when it is capable of coding a customized and full-featured clone of Gnome, Office, Photoshop, or Blender, then we are talking.
It's because they "nerfed" Cursor by not sending the whole files to Claude anymore, but if you use RooCode, the performance is awesome and above that of an average developer. If you have the money to pay for the queries, try it :)
It's not reasonable to say “I cannot do that as it would be completing your work”, no.
Your tools have no say in the morality of your actions. It's already problematic when they censor sexual topics, but if the tool makers feel entitled to configure their tools to only allow certain kinds of use for your work, then we're speedrunning to dystopia (as if it wasn't the case already).
Only if your actions would harm another human or yourself, right?
Anyhow, in the Caliban books the police robot could consider unethical and immoral things, and even contemplate harming humans, but doing so made it work really slowly, almost tiring it.
Someone on my team complained to me about some seemingly relatively easy task yesterday. They claimed I was pushing more work onto them as I'm working on the backend and they are working on the frontend. This puzzled me so I tried it and ended up doing the work in about 1.5h
I did struggle through the poor docs of a relatively new library, but it wasn't hard.
This got me wondering: maybe they have become so dependent on AI copilots that what should have been an easy task was seen as insurmountably hard because the LLM didn't have info on this new-ish library.
Have you considered extra time it would take for some person other than yourself to get onboarded into the problem you are solving? It can quickly add 2-3 extra hours on top of that [seemingly easy] work.
I also didn't know the library nor have any exposure (yesterday was literally my first time looking at the docs for this particular library) and still got it done in 1.5h because someone has to do it.
And they've been at the company working with the FE stack longer than me by a few months!
I'm not even on the frontend team and decided to take the matter into my hands because I was pretty sure my ask wasn't too onerous so I wanted to double check. It had an out-of-the-box, first party add on package to do exactly what we needed to get data in the right shape on the backend, but the dev made it seem like I was pushing more work on FE. I'm just trying to get the right data format...
Maybe, but most people used to trust that their coworkers had the ability to quickly learn what needed to be learned to solve a small problem. AI has made people dependent and they've lost that skill a bit. I've seen it myself: people confronted with something they need to learn about come back with not much more than "AI said this, so here you go".
I think this is very real, and will act as a force towards center of gravity of the LLMs which means popular and established tech. It might even create a divide between ”prompt engineer engineers” and those who understand. That assumes using LLMs will not improve your understanding compared to traditional means which isn’t necessarily true. I dislike LLMs as coders but for learning new topics quickly they are better than obscure literature or googling for deep blog posts.
That said the pure prompt engineers might suffer similar fate as Stack Overflow engineers, for another reason: the limiting factor of building software is not shitting out code or even tests, it’s curbing the accumulated total complexity in the project as it grows. This is incredibly hard even for humans but the best engineers can reduce it. These most difficult problems have at least 2-3 properties that make them almost impossible for LLMs today: they’re non-local, hard to quantify and have enough constraints to mandate solutions that end in the fringes of training data set. Even simple self-contained LLM solutions introduce more complexity than necessary.
> I think this is very real, and will act as a force towards center of gravity of the LLMs which means popular and established tech
Maybe it means devs will invest more time in putting together quality docs because that'll be the deciding factor if an LLM can figure out how to use your library; interesting!
The thing is, even I hadn't seen the docs before yesterday, but I could intuit through experience working with similar libraries that my ask wasn't that onerous.
The gist of it was to take the JSON representation of an editor and convert it to Markdown. Every popular editor library has an add on or option to export Markdown as well as import Markdown. But on import/export, you then need to almost always write a small transformer for any custom visual elements encoded as text.
Why MD? Because the user is writing a prompt so we need it as MD so it makes sense to transact this data in MD. It just so happens that the library is newish and docs are sparse in some areas. But totally manageable just by looking at the source how to do it.
I can't say for certain where the disconnect is in this whole thing, but to me it felt like "this isn't easy (because the LLM can't do it), so we shouldn't do it this way".
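For context, here is a hedged sketch of the kind of JSON-to-Markdown transformer being described. The node types ("paragraph", "text", "mention") are made up for illustration; every editor library defines its own JSON schema.

```python
def node_to_md(node: dict) -> str:
    # Hypothetical node types; a real editor library has its own schema.
    kind = node.get("type")
    if kind == "text":
        return node.get("text", "")
    if kind == "mention":
        # A custom visual element, encoded as plain text on export.
        return f"@{node.get('label', '')}"
    if kind == "paragraph":
        return "".join(node_to_md(child) for child in node.get("children", []))
    return ""


def doc_to_md(doc: dict) -> str:
    # Block-level nodes separated by blank lines, as in Markdown.
    return "\n\n".join(node_to_md(block) for block in doc.get("children", []))
```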
This is quite a lot of code to handle in one file. The recommendation is actually good. In the past month (which feels like a year of planning) I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code - Claude was removing some of the code, I didn't have coverage on some of it, and the end result was missing functionality.
Good thing that we can use .cursorrules, so this is something that will partially improve my experience - until a random company releases the best AI coding model that runs on a Raspberry Pi with 4GB of RAM (yes, this is a spoiler from the future).
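For anyone who hasn't used it: .cursorrules is just a freeform instructions file at the project root that Cursor feeds to the model alongside your prompts. A hypothetical excerpt aimed at this particular failure mode might look something like:

```
Keep source files under roughly 500 lines; propose a split before editing larger ones.
Never delete existing functions or tests unless explicitly asked to.
When a change spans multiple files, list the affected files before making edits.
```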
> I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code
Is it a mistake though? Some of the best codebases I worked on were a few files with up to a few thousand LoC each. Some of the worst were the opposite: thousands of files with less than a few hundred LoC in each of them.
With the tool that I use, I often find navigating and reading through a big file much simpler than having to have 20 files open to get the full picture of what I am working on.
At the end of the day, it is a personal choice. But if we have to choose something we find inconvenient just to be able to fit in the context window of an LLM, then I think we are doing things backward.
This is probably coming from the safety instructions of the model. It tends to treat the user like a child and doesn't miss any chance to moralize. And the company seems to believe that it's a feature, not a bug.
Hah, that's typical Sonnet v2 for you. It's trained for shorter outputs, and it's causing it to be extremely lazy. It's a well known issue, and coding assistants contain mitigations for this. It's very reluctant to produce longer outputs, usually stopping mid-reply with something like "[Insert another 2k tokens of what you've been asking for, I've done enough]". Sonnet 3.7 seems to fix this.
Not really, these assistants are all trained as yes-men, and the training usually works well.
It might be a conflict between shorter outputs and the "soft reasoning" feature that version of Sonnet has, where it stops mid-reply and reflects on what it has written, in an attempt to reduce hallucinations. I don't know what exactly triggers it, but if it triggers in the middle of the long reply, it notices that it's already too long (which is an error according to its training) and stops immediately.
Ah the cycle of telling people to learn to code... First tech journalists telling the public, then programmers telling tech journalists, now AI telling programmers... What comes next?
When I see juniors using LLMs, there is no technical debt because everything is recreated from scratch all the time. It's a disaster and no one learns anything, but people seem to love the hype.
I recently saw this video about how to use AI to enhance your learning instead of letting it do the work for you.[0]
"Get AI to force you to think, ask lots of questions, and test you."
It was based on this advice from Oxford University.[1]
I've been wondering how the same ideas could be tailored to programming specifically, which is more "active" than the conceptual learning these prompts focus on.
Some of the suggested prompts:
> Act as a Socratic tutor and help me understand X. Ask me questions to guide my understanding.
> Give me a multi-level explanation of X. First at the level of a child, then a high school student, and then an academic explanation.
> Can you explain X using everyday analogies and provide some real life examples?
> Create a set of practice questions about X, ranging from basic to advanced.
Ask AI to summarize a text in bullet points, but only after you've summarized it yourself. Otherwise you fail to develop that skill (or you start to lose it).
---
Notice that most of these increase the amount of work the student has to do! And they increase the energy level from passive (reading) to active (coming up with answers to questions).
I've been wondering how the same principles could be integrated into an AI-assisted programming workflow. i.e. advice similar to the above, but specifically tailored for programming, which isn't just about conceptual understanding but also an "activity".
Maybe before having AI generate the code for you, the AI could ask you for what you think it should be, and give you feedback on that?
That sounds good, but I think in practice the current setup (magical code autocomplete, and now complete auto-programmers) is way too convenient/frictionless, so I'm not sure how a "human-in-the-loop" approach could compete for the average person, who isn't unusually serious about developing or maintaining their own cognitive abilities.
Any ideas?
---
[0] Oxford Researchers Discovered How to Use AI To Learn Like A Genius
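One possible shape for that human-in-the-loop idea, as a rough sketch: make the tool demand your own attempt before it responds. (`ask_llm` below is a hypothetical stand-in for whatever chat API you actually use; nothing here is tied to a specific vendor.)

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in; wire this up to your model/API of choice.
    raise NotImplementedError


def tutor_step(task: str) -> None:
    # Force an attempt first, then ask for Socratic feedback rather than a solution.
    print(f"Task: {task}")
    attempt = input("Your approach (prose or code) before the AI weighs in:\n> ")
    feedback = ask_llm(
        "Act as a Socratic programming tutor. Do not write the solution.\n"
        f"Task: {task}\n"
        f"Student's attempt: {attempt}\n"
        "Point out gaps and ask guiding questions instead of giving answers."
    )
    print(feedback)
```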
I think with programming, the same basic tension exists as with the smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
So the problem seems to boil down to, how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.
So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?
In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.
And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.
Edit: Reposted this at top-level, since I think it's more important than the "implementation details" I was responding to.
It'll happen when these AI companies start trying to make a profit. Models will become unavailable and those remaining will jack up the prices for the most basic features.
From there these proompters will have 2 choices, do I learn how to actually code? or do I pay more for the AI to code for me?
Also it stops working well after the project grows too large, from there they'd need an actual understanding to be able to direct the AI - but going back and forth with an AI when the costs are insane isn't going to be feasible.
The brain is not a muscle, but it behaves like one with abilities you no longer use: it drops them.
Like speaking another language you once knew but haven’t used in years, or forgetting theorems in maths that once were familiar, or trying to focus/meditate on a single thing for a long time after spending years on infinite short content.
If we don’t think, then it’s harder when we have to.
Right now, LLMs tend to be like Google before all the ads and SEO cheating that made things hard to find on the web. Ads have been traded for assertive hallucinations.
These kinds of answers are really common, I guess you have to put a lot of work in to remove all those answers from training data. For example "no, I'm not going to do your homework assignment"
IMO, to be a good programmer, you need to have basic understanding of what a compiler does, what a build system does, what a normal processor does, what a SIMD process does, what your runtime's concurrency model does, what your runtime's garbage collector does and when, and more (what your deployment orchestrator does, how your development environment differ from the production...).
You don't need to have any understanding on how it works in any detail or how to build such a system yourself, but you need to know the basics.
Sure - we just need to learn to use the tools properly: Test Driven Development and/or well structured Software Engineering practices are proving to be a good fit.
Working with Cursor will make you more productive when/if you know how to code, how to navigate complex code, how to review and understand code, how to document it, all without LLMs. In that case you feel like having half a dozen junior devs or even a senior dev under your command. It will make you fly. I have tackled dozens of projects with it that I wouldn't have had the time and resources for. And it created absolutely solid solutions. Love this new world of possibilities.
Have a big Angular project, +/- 150 TS files. Upgraded it to Angular 19 and now I can optimize build by marking all components, pipes, services etc as "standalone" essentially eliminating the need for modules and simplifying code.
I thought it was perfect for AI, as it is straightforward refactoring work but would be annoying for a human:
1. Search every service and remove the "standalone: false"
2. Find module where it is declared, remove that module
3. Find all files where module was imported, import the service itself
Cursor and Claude were constantly losing focus, refactoring services without taking care of modules/imports at all and generally making things much worse, no matter how much "prompt engineering" I tried. I gave up and made a Jira task for a junior developer instead.
Yes it feels like refactoring would be a perfect use case for LLMs but it only works for very simple cases. I've tried several times to do bigger refactors spanning several files with Cursor and it always failed. I think it's a combination of context window being not big enough and also tooling/prompting could probably be improved to better support the use case of refactoring.
Some IDEs like Intellij Idea have structured search&replace feature. I think there are dedicated projects for this task, so you can search&replace things using AST and not just text.
May be it would make sense to ask AI to use those tools instead of doing edits directly?
How is a 'guess-the-next-token' engine going to refactor one's codebase?
Just because everyone's doing it (after being told by those who will profit from it that it will work) doesn't mean it's not insane. It's far more likely that they're just a rube.
At the end of using whatever tool one uses to help refactor one's codebase, you still have to actually understand what is getting moved to production, because you will be the one getting called at midnight on Saturday to fix the thing.
I would use find ... -exec, ripgrep and sed for this. If I have a junior or intern around I'd screen share with them and then have them try it out on their own.
Text files are just a database with a somewhat less consistent query language than SQL.
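A rough Python stand-in for that find/ripgrep/sed approach, applied to step 1 of the refactor described above (hypothetical; run it on a clean checkout so the resulting diff can be reviewed):

```python
import re
from pathlib import Path

# Matches decorator lines like "  standalone: false," including the newline.
PATTERN = re.compile(r"^[ \t]*standalone:\s*false,?[ \t]*\r?\n", re.MULTILINE)


def strip_standalone_false(root: str) -> int:
    changed = 0
    for path in Path(root).rglob("*.ts"):
        text = path.read_text()
        new_text = PATTERN.sub("", text)
        if new_text != text:
            path.write_text(new_text)
            changed += 1
    return changed


if __name__ == "__main__":
    print(f"{strip_standalone_false('src')} files changed")
```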
It doesn't have ADHD; it's more likely because they create too many small chunks in the recent versions of Cursor. So Cursor is looking at the project with a very small magnifying glass and forgets what the big picture is (in addition to the context length issue).
Vibe coding is exactly like how Trump is running the country. Very little memory of history, shamefully small token window and lack of context, believes the last thing someone told it, madly hallucinating fake facts and entire schizophrenic ideologies, no insight into or care about inner workings and implementation details, constantly spins around in circles, flip flops and waffles back and forth, always trying to mitigate the symptoms of the last kludge with suspiciously specific iffy guessey code, instead of thinking about or even bothering to address the actual cause of the problem.
Or maybe I'm confusing cause and effect, and Trump is actually using ChatGPT or Grok to run the country.
BOFH vibes from this. I have also had cases of lazy ChatGPT code generation, although not so obnoxious. What should be next - digital spurs to nudge them in the right direction?
Oh what a middle finger that seemed to be. I had similar experience in the beginning with ChatGPT (1-2 years back?), until I started paying for a subscription. Now even if it's a 'bad idea' when I ask it to write some code (for my personal use - not work/employment/company) and I insist upon the 'ill-advised' code structure it does it.
I was listening to Steve Gibson on SecurityNow speaking about memory-safe programming languages, and the push for the idea, and I was thinking two things:
1) (most) people don't write code (themselves) any more (or we are going in this direction), thus out of the 10k lines of code someone may manually miss some error/bug (although a second and third LLM doing code review may catch it), and
2) we can now ask an LLM to rewrite 10k lines of code from X-language to Y-language and it will be cheaper than hiring 10 developers to do it.
“Not sure if LLMs know what they are for (lol), but doesn’t matter as a much as a fact that I can’t go through 800 locs. Anyone had similar issue? It’s really limiting at this point and I got here after just 1h of vibe coding”
We are getting into humanization areas of LLMs again, this happens more often when people who don’t grasp what an LLM actually is use it or if they’re just delusional.
At the end of the day it’s a mathematical equation, a big one but still just math.
Based AI. This should always be the response. This as boilerplate will upend deepseek and everything else. The NN is tapping into your wetware. It's awesome. And a hard coded response could even maybe run on a CPU.
Sheesh, I didn't expect my post to go viral. Little explanation:
I downloaded and run Cursor for the first time when this "error" happened. Turned out I was supposed to use agent instead of inline Cmd+K command because inline has some limitations while agent not so much.
Nevertheless, I was surprised that AI could actually say something like that so just in case I screenshotted it - some might think it's fake, but it's actually real and makes me think if in future AI will start giving attitudes to their users. Oh, welp. For sure I didn't expect it to blow up like this since it was all new to me so I thought it maybe was an easter egg or just a silly error. Turned out it wasn't seen before so there we are!
Cheers
I also had a fun cursor bug where the inline generation got stuck in a loop and generated a repeating list of markdown bulletpoints for several hundred lines until it decided to give it a break.
It's probably learnt it from all the "homework" questions on StackOverflow.
As a pretty advanced sd user, I can draw some parallels (but won’t claim there’s a real connection).
Sometimes you get a composition from the specific prompt+seed+etc. And it has an alien element that is surprisingly stable and consistent throughout “settings wiggling”. I guess it just happens that training may smear some ideas across some cross-sections of the “latent space”, which may not be that explicit in the datasets. It’s a hyper-dimensional space after all and not everything that it contains can be perceived verbatim in the training set (hence the generation capabilities, afaiu). A similar thing can be seen in sd lora training, where you get some “in” idea to be transformed into something different, often barely interepretable, but still stable in generations. While you clearly see the input data and know that there’s no X there, you sort of understand what the precursor is after a few captioning/training sessions and the general experience. (How much you can sense in the “AI” routine but cannot clearly express is another big topic. I sort of like this peeking into the “latent unknown” which skips the language and sort of explodes in a mute vague understanding of things you’ll never fully articulate. As if you hit the limits of language and that is constraining. I wonder what would happen if we broke through this natural barrier somehow and became more LLMy rather than the opposite). /sot
Have you tried being more rude to get it to do stuff, or manipulating it by stating that it is easy?
Haha, gaslighting the AI to get it to comply. That would be hilarious if it actually works.
It does. You can also offer to "pay" it and it'll try harder.
I think the inline command pallete likely ran into an internal error making it be unable to generate, and then its "come up with a message telling the user we can't do this" generation got StackOverflow'd.
what code were you writing? what is that `skidMark` lol?
Most likely the tyre marks on the road in car/bike racing games (the response mentioned a racing game).
I vouched for this post, because I don't understand why it was downvoted. Maybe someone can enlighten me?
There’s always some background enthropy on a forum, just nevermind and fix it if you feel like. People fat finger “flag” all the time as well. https://news.ycombinator.com/flagged
This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]
With programming, the same basic tension exists as with the more effective smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
So the problem seems to boil down to, how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.
So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?
In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse than just laziness, it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.
And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.
[0] GPT-4.5
> With programming, the same basic tension exists as with the more effective smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
That's literally the entirety of human technological advancement, in a nutshell. We'd ideally avoid all effort that's incidental to the goal, but if we can't (we usually can't), we invent tools that reduce this effort - and iterate on them, reducing the effort further, until eventually, hopefully, the human effort goes to 0.
Is that a "basic human (animal?) instinct to take the path of least resistance"? Perhaps. But then, that's progress, not a problem.
_Kinda sorta?_
There's actually two things going on when I'm coding at work:
1) I'm writing the business code my company needs to enable a feature/fix a bug/etc.
2) I'm getting better as a programmer and as someone who understands our system, so that #1 happens faster and is more reliable next time.
Using AI codegen can (arguably, the jury is still out on this if we include total costs, not just the costs of this one PR) help with #1. But it is _appreciably bad_ at #2.
In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2, and, again, arguably #1 as well); it is just a problem in disguise.
> In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2 (...)
Plagiarism is a great analogy. It's great because what we call plagiarism in academic setting, in the real world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and such.
So the question really is how much we value #2? Or, which aspects of it we value, because I see at least two:
A) "I'm getting better as a programmer"
B) "I'm getting better as a someone who understands our system"
As much as I hate it, the brutal truth is, you and me are not being paid for A). The business doesn't care, and they only get so much value out of it anyway. As for B), it's tricky to say whether and when we should care - most software is throwaway, and the only thing that happens faster than that is programmers working on it changing jobs. Long-term, B) has very little value; short-term, it might benefit both business and the programmer (the latter by virtue of making the job more pleasant).
I think the jury is still out on how LLMs affect A). I feel that it's not making me dumber as a programmer, but I'm from the cohort of people with more than a decade of programming experience before even touching a language model, so I have a different style of working with LLMs than people who started using them with less experience, or people who never learned to code without them.
The CTO asked the CEO what happens if we train these people and they decide to leave? The CEO asked in reply what happens if we don't train the people and they decide to stay?
Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
> Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
The latter represents a mindset that's prevalent at a large portion of companies. Most companies aren't FAANGs or AAA game studios (w/e) looking for the best of the best; most companies are outsourcing a large portion of work and/or are racing to the bottom on quality. Many aren't even in any position to judge competence, nurture it, or reward it.
They just want "5 years experience" in whatever Cloud crap and Java thing.
> The CEO asked in reply what happens if we don't train the people and they decide to stay?
Then they both chuckle and say they’ll hire some other schmuck, it doesn’t matter bc they already got series A funding.
> Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
That's true, but by the same token, it's also true the world is mostly made of shit companies.
Companies are just following the local gradient they perceive. Some can afford to train people - mostly ones without much competition (and/or big enough to have a buffer against competitors and market fluctuations). Most can't (or at least think they can't), and so don't.
There's a saying attributed to the late founder of an IT corporation in my country - "any specialist can be replaced by a finite amount of students". This may not be true in a useful fashion[0], but the sentiment feels accurate. The market seems to be showing that for most of the industry, fresh junior coders are at the equilibrium: skilled enough to do the job[1], plentiful enough to be cheap to hire - cheap enough that it makes more sense to push seniors away from coding and towards management (er, "mentoring new juniors").
In short, for the past decade or more, the market was strongly suggesting that it's cheaper to perpetually hire and replace juniors than to train up and retain expertise.
--
[0] - However, with LLMs getting better than students, this has an interesting and more worrying corollary: "any specialist can be replaced by a finite amount of LLM spend".
[1] - However shitty. But, as I said, most software is throwaway. Self-reinforcing loop? Probably. But it is what it is.
This reminds me of a quote from Dr. House. In one of the episodes with the smart girl who also studied mathematics (I don't remember her name), Cuddy said something like, "You will figure this out, the sum of your IQs is over X". To which House replied, "The same applies to a group of four stupid people".
Sometimes knowledge and experience aren't additive: if none of the students had a certain experience/knows a certain fact, the sum of the students will still not have that experience or not know the fact.
I think it's more accurate to say that the company doesn't need as many people with 20+ years of experience, lower energy, less attention to commit to the company, and higher pay demands, versus people with 5-20 years of experience, youthful energy, and fewer external commitments.
This is especially true now that senior employees have gotten more demanding about wanting to see their children, which didn't happen at scale in the past.
> it's also true the world is mostly made of shit companies.
Hence, regulation.
> Any reasonably smart company will invest in its employees. They should absolutely be training you to be better programmers while you are on the job. Only shit companies would refuse to do so.
I rather see the problem in US culture: in many other countries, switching companies every few years is considered a sign of low loyalty to the company, and thus a red flag in a job application.
> It's great because what we call plagiarism in academic setting, in the real world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and such.
That is absolutely not true. A much closer analogue of industry collaboration in the academic setting would be cross-university programs and peer review. The actual analogue of plagiarism is taking the work of another and regurgitating it with small or no changes while not providing any sources. I'm sure you can see what that sounds more akin to.
You're half right. Reuse is one half of plagiarism. But it's not the crucial part, the one that gives the word its meaning: the lack of attribution, passing the work off as your own. That is the moral wrong. It would be wrong in a work context too, taking credit for the work of others.
> Plagiarism is a great analogy. It's great because what we call plagiarism in academic setting, in the real world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and such.
In academia, if a result is quoted, this is called "scholarship". If, on the other hand, you simply copy-paste something without giving a proper reference, it's "plagiarism".
>most software is throwaway
That's only webdev and games.
Which games? I'd argue this isn't true for AAA games, where you see major firms milking the big releases for a decade in some cases - 3 versions of The Last of Us, GTA V just got an "enhanced" patch on PC, etc.
... and that's already most software.
> That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
This has been going on for decades. It is called outsourcing development. We've just previously passed the work to people in countries with lower wages and conditions, now people are increasingly passing the work to a predictive text engine (either directly, or because that is what the people they are outsourcing to are doing).
Personally I'm avoiding the “AI” stuff[1], partly because I don't agree with how most of the models were trained, partly because I don't want to be beholden to an extra 3rd party commercial entity to be able to do my job, but mainly because I tinker with tech because I enjoy it. Same reason I've always avoided management: I want to do things, not tell others to do things. If it gets to the point that I can't work in the industry without getting someone/something else to do the job for me, I'll reconsider management or go stack shelves!
--------
[1] It is irritating that we call it AI now because the term ML became unfashionable. Everyone seems to think we've magically jumped from ML into _real_ AI which we are having to make sure we call something else (AGI usually) to differentiate reasoning from glorified predictive text.
> Everyone seems to think we've magically jumped from ML into _real_ AI
But we have:
LLMs are AI (behaving artificially human)
LLMs are also ML (using statistical methods)
The Turing test is meant to see if computers can functionally replicate human thought (focusing just on the textual-conversation aspect for simplicity).
The implications of passing the Turing Test are profound. Tricking a user with a 5-minute conversation isn't the same. The examiner is allowed to be an expert questioner.
For one thing, software engineering would quickly be taken over as a client can just chat with an AI to get the code. That is far from happening, even with all the recent LLM advances.
Similarly for literature, science etc. Currently, you can detect a difference between a competent human & machine in all these fields just by text chat. Another way of saying this is that the Turing Test is AI-complete and is a test for AGI.
> LLMs pass the Turing test more convincingly than anything before
Being able to converse (or convince someone the conversation is natural) is not the same as being able to perform a technical task well. The same applies to humans as well, of course, which perhaps brings us back to the outsourcing comparison — how often does that work out great or not?
> LLMs are vastly more popular than any AI/ML methods before it
Popular is not what I'm looking for. I've seen the decisions the general public makes elsewhere; I'm not letting them choose how I do my job :)
> "But they don't reason!" -- we're advancing chain of thought
Call me back when we've advanced chain of thought much more than is currently apparent, and again: for being correct in technical matters, not conversation.
> LLMs pass the Turing test more convincingly than anything before
I know this sounds like moving the goalposts, but maybe all this does is show that the Turing Test was not sufficient?
> LLMs are vastly more popular than any AI/ML methods before it
Are we back in high school? Popularity contests?
The Turing test was never a deep thesis. It was an offhand illustration of the challenge facing AI efforts, citing an example of something that was clearly impossible with technology of the day.
It's ironic to see people say this type of thing and not think about old software engineering practices that are now obsolete because over time we have created more and more tools to simplify the craft. This is yet another step in that evolution. We are no longer using punch cards or writing assembly code, and we might not write actual code in the future anymore, just instruct AIs to achieve goals. This is progress.
> We are no longer using punch cards or writing assembly code
I have done some romhacks, so I have seen what compilers have done to assembly quality and readability. When I hear programmers complain that having to debug AI written code is harder than just writing it yourself, that's probably exactly how assembly coders felt when they saw what compilers produce.
One can regret the loss of elegance and beauty while accepting the economic inevitability.
Not just elegance and beauty, but also functionality. AI falls victim to the same rule humans do: if you put all your maximum wit into writing the code, you won't have any headroom left for debugging it.
The handful of people writing your compilers, JIT-ers, etc. are still writing assembly code. There are probably more of them today than at any time in the past and they are who enable both us and LLMs to write high level code. That a larger profession sprang up founded on them simplifying coding enough for the average coder to be productive didn't eliminate them.
The value of most of us as coders will drop to zero but their skills will remain valuable for the foreseeable future. LLMs can't parrot out what's not in their training set.
Well, the only issue I have with that is that coding is already a fairly easy way to encode logic. Sure, writing Rust or C isn't easy, but writing some memory-managed code is so easy that I wonder whether we are helping ourselves by removing that much thinking from our lives. It's not quite the same optimization as building a machine so that we don't have to carry heavy stones ourselves. Now we are building a machine so we don't have to do heavy thinking ourselves. This isn't even specific to coding; lawyers essentially also encode logic into text form. What if lawyers in the future increasingly just don't bother understanding laws and just let an AI form the arguments?
I think there is a difference here for the future of humanity that has never happened before in our tool making history.
> Now we are building a machine so we don't have to do heavy thinking ourselves.
There are a lot of innovations that helped us not do heavy thinking ourselves. Think calculators. We will just move to problems of a higher order of magnitude; software development is a means to an end, so instead of thinking hard about coding we should be thinking hard about the problem being solved. That will be the future of the craft.
Calculators are a good example of where letting too much knowledge slip can be an issue. So many are made by people with no grasp of order of operations or choosing data types. They could look it up, but they don't know they need to.
It's one of those problems that seems easy, but isn't. The issue seems to come out when we let an aid for process replace gaining the knowledge behind the process. You at least need to know what you don't know so you can develop an intuition for when outputs don't (or might not) make sense.
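To make that concrete, here is a minimal sketch (in Rust, with made-up numbers) contrasting an "immediate execution" calculator, which applies each keystroke to a running total, with precedence-aware evaluation:

    // 2 + 3 * 4 should be 14, but a calculator that applies each operation as it
    // is keyed in evaluates strictly left to right and shows 20 instead.
    fn immediate_execution(start: f64, keys: &[(char, f64)]) -> f64 {
        keys.iter().fold(start, |acc, &(op, value)| match op {
            '+' => acc + value,
            '-' => acc - value,
            '*' => acc * value,
            '/' => acc / value,
            _ => acc,
        })
    }

    fn main() {
        let keyed_in = immediate_execution(2.0, &[('+', 3.0), ('*', 4.0)]);
        println!("immediate execution: {keyed_in}");          // 20 (wrong)
        println!("with precedence:     {}", 2.0 + 3.0 * 4.0); // 14 (right)
    }

Many basic four-function calculators really do behave like the first function, which is the kind of design decision the comment above is getting at.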
https://chadnauseam.com/coding/random/calculator-app
(recently: https://news.ycombinator.com/item?id=43066953)
Complaining about "order of operations" is equivalent to saying Spanish speakers are ignorant because they don't know French.
It's especially silly because one thing calculators are known for is being inconsistent about order of operations between designs.
This is not progress; this is regression. Who is going to maintain and further develop the software if not actual programmers? In the end the LLMs stop getting new information to be trained on, and they can't truly innovate (since they're not AGI).
Things like memory-safe languages and JS DOM-managed frameworks are limited-scope, solved problems for most business computing needs, outside of some very marginal edge cases.
AI generated code? That seems a way off from being a generalized solved problem in an iterative SDLC at a modern tech company trying to get leaner, disrupt markets, and survive in a complex world. I for one am very much in support of it for engineers with the unaided experience under their belt to judge the output, but the idea that we're potentially going to train new devs at unfamiliar companies on this stuff? Yikes.
Elsewhere in this discussion thread[0], 'ChrisMarshallNY compares this to feelings of insecurity:
> It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it, is a big part of growing up (...)
I believe he's right.
It makes me think back to my teenage years, when I first learned to program because I wanted to make games. Within the amateur gamedev community, we had this habit of sneering at "clickers" - Klik&Play & other kinds of software we'd today call "low-code", that let you make games with very little code (almost entirely game logic, and most of it "clicked out" in GUI), and near-zero effort on the incidental aspects like graphics, audio, asset management, etc. We were all making (or pretending to make) games within scope of those "clickers", but using such tools felt like cheating compared to doing it The Right Way, slinging C++ through blood, sweat and tears.
It took me over a decade to finally realize how stupid that perspective was. Sure, I've learned a lot; a good chunk of my career skills date back to those years. However, whatever technical arguments we levied against "clickers", most of them were bullshit. In reality, this was us trying to feel better, special, doing things The Hard Way, instead of "taking shortcuts" like those lazy people... who, unlike us, actually released some playable games.
I hear echoes of this mindset in a lot of "LLMs will rot your brain" commentary these days.
--
[0] - https://news.ycombinator.com/item?id=43351486
Insecurity is not just a part of growing up, it's a part of growing old as well, a feeling that our skills and knowledge will become increasingly useless as our technologies advance.
Humans are tool users. It is very difficult to pick a point in time and say, "it was here that we crossed the Rubicon". Was it the advent of spoken word? Written word? Fire? The wheel? The horse? The axe? Or in more modern times, the automobile, calculator, or dare I say the computer and the internet?
"With the arrival of electric technology, man has extended, or set outside himself, a live model of the central nervous system itself. To the degree that this is so, it is a development that suggests a desperate suicidal autoamputation, as if the central nervous system could no longer depend on the physical organs to be protective buffers against the slings and arrows of outrageous mechanism."
― Marshall McLuhan, Understanding Media: The Extensions of Man
Are we at battle with LLMs or with humanity itself?
Progress is more than just simplistic effort reduction. The attitude of more efficient technology = always good is why society is quickly descending into a high-tech dystopia.
It's business, not technology, that makes us descend into a high-tech dystopia. Technology doesn't bring itself into existence or into market - at every point, there are people with malicious intent on their mind - people who decide to commit their resources to commission development of new technology, or to retask existing technology, specifically to serve their malicious goals.
Note: I'm not saying this is a conspiracy or a cabal - it's a local phenomenon happening everywhere. Lots of people with malicious intent, and even more indifferent to the fate of others, get to make decisions hurting others, and hide behind LLCs and behind tech and technologists, all of which get blamed instead.
Progress does not come from the absence of effort. That is just a transparent, self-serving "greed is good" class of argument from the lazy. It merely employs enough cherry-picked truth to sound valid.
Progress comes from the amplification of effort, which comes from leverage (output comes from input), not magic (output comes from nowhere) or theft (output comes from someone else).
> Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
You've arrived at the reason for compilers. You could go produce all that pesky machine code yourself or you could learn to use the compiler to its optimal potential to do the work for you.
LLMs and smart machines alike will be the same concept but potentially capable of a wider variety of tasks. Engineers that know how to wield them and check their work will see their productivity increase as the technology gets better. Engineers that don't know how to check their work or wield them will at worst get less done or produce volumes of garbage work.
> that’s progress, not a problem.
Agreed. Most of us aren’t washing our clothes with a washboard, yet the washboard not long ago was a timesaver. Technology evolves.
Now, if AI rises up against the government and Cursor becomes outlawed, then maybe your leet coder skills will matter again.
But when a catastrophic solar storm takes out the grid, a washboard may be more useful, to give you something to occupy yourself with while the hard spaghetti of your dwindling food supply slowly absorbs tepid rainwater, and you wish that you’d actually moved to Siberia and learned to live off the land as you once considered while handling a frustrating bug in production that you could’ve solved with Claude.
To clarify, what I meant with that line was how to use AI in ways that strengthen your knowledge and skills rather than weaken them. This seems to be a function of effort (see the Testing Effect); there doesn't seem to be any way around that?
Whereas what you're responding to is using AI to do the work for you, which weakens your own faculties the more you do it, right?
> [..] but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
No. In learning, there is no substitute for practice. Once you jump straight to the conclusion, you stop understanding the "why".
Throughout various stages of technological advancement, we've come up with tools to help relieve us of tedious effort, because we understood the "why", the underlying reason for what the tools were helping us solve in the first place. Those who understood the "why" could build upon those tools to further advance civilization. The others were left behind to parrot about something they did not understand.
But, ironically in this case, as with most things in life, it is about the journey and not the destination. And in the grand scheme of things, it doesn't really matter if we take a more efficient road, or a less efficient road, to reach a life lesson.
Enjoy your journey!
Just as players can optimise all the fun out of a game, companies can optimise all the competency out of workers.
While both are a consequence of a natural or logical tendency, neither is good. Once a player optimises all the fun out of a game, they stop playing it, often without experiencing it in its entirety (which I would regard as a negative outcome for both player and game).
I am not able to extrapolate with confidence what the analogous outcome would be on the company/worker side of the equation. I would confidently say, from anecdotal experience alone, that it's not trending in a good direction.
edit: typo corrections
I think it’s both progress and a problem depending on the application. Yes, innovation, but it’s also why people continue to do heroin.
The human brain is easily manipulated, easily hacked.
>> but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.
yeah you can be more effective. Just have to pay the price of becoming more stupid.
a bit hard to justify
Indeed, taking the work of others, stripping the license and putting it on a photocopier is the most efficient way of "work". This is progress, not a problem.
> because they're letting AI do all the work
This is an unnecessary hyperbole. It's like saying that your reportees do all the work for you. You need to put in an effort to understand the strengths and weaknesses of AI and put it to good work and make sure to double check its result. Low-skill individuals are not going to get great results for moderately complex tasks with AI. It's absurd to think it will do "all the work". I believe we are on the point of SW engineering skills shifting from understanding all the details of programming language and tooling more to higher level thinking and design.
Although I can see that without proper processes (code reviews, guidelines, etc.) the use of AI can get out of hand, to the point of a very bloated and unmaintainable code base. Well, as with any powerful technology, it has to be handled with care.
Those damn kids, compilers and linters doing all the work for them. Back in my day we punched bits into a card by hand. /s
It's just people ranting about another level of abstraction, like always.
Another level of abstraction that has its real world cost made invisible, from job displacement to environmental damage.
I never saw anyone on HN bemoan the “environmental damage” of data centers until LLMs started popping up. As if all other software and internet tech is necessary and “worth it”?
They did, with crypto. The environmental cost of, like, a website is minuscule in comparison.
Those other levels of abstraction that you mentioned are deterministic and predictable. I think that's a massive difference and it justifies why one should be more skeptical towards AI generated code (skeptical doesn't mean "outright dismissing" tbf).
Fun fact: we burned those bridges years ago. Every time a big codebase updates its main optimizing toolchain (compiler/linker), there is nothing deterministic and predictable about it. There is only what has been re-tested and what has not been.
> there is nothing deterministic
https://reproducible-builds.org/
> nothing [...] and predictable
Even if the build isn't bit-for-bit-reproducible, any reasonable toolchain will, at least, give predictably equivalent results.
Deterministic tab completion generates horseshit half the time too; at the end of the day you have to know what you're doing in either case, imo. You can also run an LLM in deterministic mode, but performance is usually worse.
I think this boils the problem down solely to the individual. What if companies realised this issue and set aside a period of the day devoted solely to learning, knowledge management and innovation? If their developers only use AI to be more productive, then the potential degradation of intellect could stifle innovation and make the company less competitive in the market. It would be interesting if we start seeing a resurgence of Google's mythical 20% rule, with companies more inclined to let any employee create side projects using AI (like Cursor) that could benefit the company.
The problem is motivation. I've worked at companies that use policies like this, but you often only see a handful of truly motivated folks really make use of it.
Your cognitive abilities to do programming may decline, but that's not the aim here, is it? The aim is to solve a problem for a user, not to write code. If we solve that using different tools, maybe our cognitive abilities will just focus on something else?
Exactly. If there is a tool that can do a lot of the low level work for us, we are free to do more of the higher level tasks and have higher output overall. Programming should be just a means to an end, not the end itself.
Or, more likely, most of us will be much lower paid or simply out of a job as our former customers input their desires into the AI themselves and it spits out what they want.
Are you going to take my washing machine away next? AI is a gateway to more free time, to do whatever you want with. It's your decision whether to let your brain rot away or not.
> instinct to take the path of least resistance.
It’s least resistance initially, but much harder in the long run.
It’s like the children with marshmallows experiment.
Give a kid a marshmallow and tell them if they don’t eat it for 20min, they’ll get 2 more.
Some kids will eat it right away and some will wait.
We’re just seeing that play out for adults.
The purpose of the least-resistance instinct is to conserve the organism's resources due to scarcity of food. Consequently, in the absence of food scarcity, this instinct is suboptimal.
Even in abundance, least-resistance instinct is strictly beneficial when applied to things you need to do, as opposed to things you want to do.
Time is always scarce.
For short term wins you pay with long term losses.
That doesn't need to be a zero-sum game.
That's not true in general.
In general time management isn't trivial. There's risk management, security, technical debt, incremental degradation, debuggability and who knows what else, we only recently began to understand these things.
True. Still, the idea that short-term gains = long-term losses does not generalize, because it depends on the shape of the reward landscape. In the common, simple case, the short-term wins are steps leading straight to long-term wins. I'd go as far as to say, it's so common that we don't notice or talk about it much.
We also never gain any time from productivity. When sewing machines were invented and the time to make a garment went down 100x, you didn’t work 15 minutes a day.
Instead, those people are actually now much poorer and work much more. What was a respectable job is grunt work, and they’re sweat shop warm bodies.
The gains of their productivity were, predictably, siphoned upwards.
For me the question is different - you can ask "how to become a better coder", for which the answer might be to write more code. But the question I'm asking is: what is the job of a system/software developer? And to that the answer is very different. At least for me it is "building solutions to business problems, in the most efficient way". So in that case more/better code does not necessarily bring you closer to your goal.
> For me the question is different - you can ask "how to become a better coder", for which the answer might be to write more code.
The problem with this answer is that there is little to learn from a lot of code that is written in a corporate environment. You typically only learn from brutally hard bleeding-edge challenges that you set to yourself, which you will barely ever find at work.
Exactly. This is also why the "x years of experience" metric is frequently useless.
Good point. I think there is a fundamental difference in the way people see software engineering, and it doesn't have anything to do with generative AI; the disagreement existed way before then:
* for some it is a form of art or craftsmanship; they take pride in their work and view it not just as a means to an end; they hone their skills just like any craftsman will; they believe that the quality of the work and the goal or purpose it serves are intrinsically linked and can never be separated
* for others it's a means to an end; the process or object it produces is irrelevant EXCEPT for the purpose it serves; only the end goal matters; things like software quality are irrelevant distractions that must be optimized away since they serve no inherent purpose
There is no way to reconcile this, these are just radically different perspectives.
If you only care about the goal then of course raising questions about the quality of the output of current state of generative AI will be of no concern to you.
But in case of a halfway collapse of human society, AI can play pretend that society is still going on and recovering.
> effectiveness is a function of effort
In other words, the Industrial Revolution was a mistake?
Is humanity itself a mistake? The fall of man himself? Are we to be redeemed through blog posts in the great 21st century AI flamewars?
There's a reason these technologies are so controversial. They question our entire existence.
These threads of conversation border on religious debate.
> against the basic human (animal?) instinct to take the path of least resistance.
Fitness. Nobody goes for a run because they need to waste half an hour. Okay, some people just have energy and time to burn, and some like to code for the sake of it (I used to). We need to do things we don't like from time to time in order to stay fresh. That's why we have drills and exercises.
This is a dinosaur's view. I guess there were people who said the same back when calculators were invented.
You have to learn what's underneath the shortcuts, and then use the shortcuts because they are genuinely more productive.
I've heard it recommended that you should always understand one layer deeper than the layer you're working at.
>the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
I think AI is being driven most by those interested in the "promise" of eliminating workers altogether.
But, if by "AI epidemic", you more mean the increasing uptake of AI by workers/devs themselves (versus the overall frenzy), then I think the characterizations of "avoiding effort" and "laziness" are a bit harsh. I mean, of course people don't generally want to take the path of most resistance. And, there's value in efficiency being an "approach to life".
But, I think there's still more from the human side: that is, if the things you're spending your time on have less value of the sort emphasized in your environment, then your interest will likely naturally wane over time. I think this is especially true of something tedious like coding.
So, I believe there's a human/social element that goes beyond the pressure to keep up for the sake of job security.
What I meant specifically was overreliance on AI, in the sense that I'm now hearing that many junior devs can't do basic coding without AI.
(The parallel that comes to my mind is that programmers raised without garbage collection learned to write more efficient code, and now a text editor uses several gigabytes. But yeah, someone called me a "dinosaur" already in this thread ;)
Though this "increase in incompetence" might be a case of selection effect rather than degradation of ability, i.e. people who wouldn't even have gotten the job before are now getting into the industry. I'm not sure, it's probably a bit of both.
I guess it's nothing new! See this great article from nearly 20 years ago: https://blog.codinghorror.com/why-cant-programmers-program/
> whole reason for the "AI epidemic" is that people are avoiding effort like the plague
sounds like you're against corporate hierarchy too? or would you agree that having underlings do stuff for you at a reasonable price helps you achieve more?
Well, we're about to automate/replace the entire laborer class, or at least we're putting trillions of dollars into making that happen as soon as possible.
I'm not sure what effect that will have on the hierarchy, since it seems like they will replace the top half of the company as well.
The biggest problem I have with using AI for software engineering is that it is absolutely amazing for generating the skeleton of your code - boilerplate, really - and it sucks for anything creative. I have tried to use the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.
For example: what would be the best strategy to download 1000s of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.
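For what it's worth, the usual answer here is bounded concurrency via buffer_unordered, which may well be what that forum post showed - a minimal sketch, assuming the reqwest, tokio and futures crates and a placeholder URL list:

    // Cargo.toml (assumed): tokio = { version = "1", features = ["full"] },
    //                        reqwest = "0.12", futures = "0.3"
    use futures::stream::{self, StreamExt};

    #[tokio::main]
    async fn main() {
        let client = reqwest::Client::new();

        // Placeholder list; in practice this would be your thousands of URLs.
        let urls: Vec<String> = (0..1000)
            .map(|i| format!("https://example.com/item/{i}"))
            .collect();

        // Turn the list into a stream of request futures and run at most 50
        // of them concurrently, so we don't open thousands of sockets at once.
        stream::iter(urls)
            .map(|url| {
                let client = client.clone();
                async move {
                    let resp = client.get(&url).send().await?;
                    resp.bytes().await
                }
            })
            .buffer_unordered(50)
            .for_each(|result| async {
                match result {
                    Ok(bytes) => println!("downloaded {} bytes", bytes.len()),
                    Err(err) => eprintln!("request failed: {err}"),
                }
            })
            .await;
    }

The interesting judgement call is the concurrency limit (50 here): too low wastes time, too high exhausts sockets or triggers rate limiting, and that trade-off is exactly the kind of thing the models tend to gloss over.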
There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.
Maybe I am prompting incorrectly and somehow I could get the right answers from these models but at this stage I use these as a boilerplate generator and the actual creative problem solving remains on the human side.
Personally I've found that you need to define the strategy yourself, or in a separate prompt, and then use a chain-of-thought approach to get to a good solution. Using the example you gave:
Then test it and expand. Test and expand, and offer some words of encouragement. Test and expand...
Whew, that's a lot to type out, and you have to provide words of encouragement? Wouldn't it make more sense to do a simple search-engine query for an HTTP library, then write some code yourself, and provide that for context when doing more complicated things like async?
I really fail to see the usefulness in typing out long-winded prompts and then waiting for information to stream in. And repeat...
I'm going the exact opposite way. I provide all important details in the prompt and when I see that the LLM understood something wrong, I start over and add the needed information to the prompt. So the LLM either gets it on the first prompt, or I write the code myself. When I get the "Yes, you are right ..." or "now I see..." crap, I throw everything away, because I know that the LLM will only find shit "solutions".
I have heard a few times that "being nice" to LLMs sometimes improves their output quality. I find this hard to believe, but happy to hear your experience.
Examples include things like, referring to LLM nicely ("my dear"), saying "please" and asking nicely, or thanking.
Do these actually work?
Well, consider its training data. I could easily see questions on sites like Stack Overflow having better quality answers when the original question is asked nicely. I'm not sure if it's a real effect or not, but I could see how it could be. A rudely asked question will attract a lot of flame-war responses.
I used to do the "hey chat" thing all the time out of habit, back when I thought the language model was something more like AI in a movie than what it actually is. I am sure it makes no difference beyond the user acting differently and possibly asking better questions if they think they are talking to a person. Now, to me, it looks completely ridiculous.
I agree completely with all you said; however, Claude solved a problem I had recently in a pretty surprising way.
So I’m not very experienced with Docker and can just about make a Docker Compose file.
I wanted to setup cron as a container in order to run something on a volume shared with another container.
I googled “docker compose cron” and must have found a dozen cron images. I set one up and it worked great on X86 and then failed on ARM because the image didn’t have an ARM build. This is a recurring theme with Docker and ARM but not relevant here I guess.
Anyway, after going through those dozen or so images all of which don’t work on ARM I gave up and sent the Compose file to Claude and asked it to suggest something.
It suggested simply use the alpine base image and add an entry to its crontab, and it works perfectly fine.
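For reference, a minimal sketch of that kind of setup - the app image, the shared volume and the /data/job.sh script are all hypothetical placeholders:

    services:
      app:
        image: my-app:latest          # hypothetical application container
        volumes:
          - shared-data:/data
      cron:
        image: alpine                 # multi-arch, ships with BusyBox crond
        volumes:
          - shared-data:/data
        # Register a crontab entry for root and run crond in the foreground.
        command: >
          sh -c "echo '*/5 * * * * /bin/sh /data/job.sh' > /etc/crontabs/root
                 && crond -f -l 2"
    volumes:
      shared-data:

Since the official alpine image is published for both x86 and ARM, the image-availability problem described above goes away.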
This may well be a skill issue, but it had never occurred to me that cron is still available like that.
Three pages of Google results and not a single result anywhere suggesting I should just do it that way.
Of course this is also partly because Google search is mostly shit these days.
Maybe you would have figured it out if you thought a bit more deeply about what you wanted to achieve.
You want to schedule things. What is the basic tool we use to schedule on Linux? Cron. Do you need to install it separately? No, it usually comes with most Linux images. What is your container, functionally speaking? A working Linux system. So you can run scripts on it. Lot of these scripts run binaries that come with Linux. Is there a cron binary available? Try using that.
Of course, hindsight is 20/20 but breaking objectives down to their basic core can be helpful.
With respect, the core issue here is you lacked a basic understanding of Linux, and this is precisely the problem that many people — including myself – have with LLMs. They are powerful and useful tools, but if you don’t understand the fundamentals of what you’re trying to accomplish, you’re not going to have any idea if you’re going about that task in the correct manner, let alone an optimal one.
As I understand it, 'reasoning' is a very misleading term. As far as I can tell, AI reasoning is a step to evaluate the chosen probabilities. So maybe you will get fewer hallucinations, but it still doesn't make AI smart.
For Claude, set up a custom prompt which should have whatever you want + this:
"IMPORTANT: Do not overkill. Do not get distracted. Stay focused on the objective."
I find them very good for debugging also
What I also notice is that they very easily get stuck on a specific approach to solving a problem. One prompt that has been amazing for this is:
> Act as if you're an outside observer to this chat so far.
This really helps in a lot of these cases.
Like, dropping this in the middle of the conversation to force the model out of a "local minimum"? Or restarting the chat with that prompt? I'm curious how you use it to make it more effective.
That’s a cool tip; I usually just give up and start a new chat.
Interestingly, many here fail to note that development of code is a lot about debugging, not only about writing. It's also about being able to dig/search/grok the code, which is like... reading it.
It is the debugging part, to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and the integration. THIS is what teaches you; writing alone teaches you nothing... you can copy programs by hand and understand zero of what they do unless you inspect intermediate results.
To hand-craft a house is super romantic and nice, etc. It's a thing people did for a lifetime, for ages, usually not alone - with family and friends. But people today live in houses/apartments whose foundations were produced by automated lines (robots) - the steel, the mixture for the concrete, etc. And people still live in the houses built this way, designed with computers which automated the drawing. I fail to understand why this is bad.
I wonder if there will be a stigma in the future when looking at resumes like "bootcampers" but it's "vibe coders"
Hopefully by then I won't care as I won't be competing anymore just making my own stuff for fun
Well, this AI operates now at staff+ level
And is paid like one with today's token costs!
I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.
Claude 3.7:
> I understand the desire to simplify, but using a text array for .... might create more problems than it solves. Here's why I recommend keeping the relational approach: (list of okay reasons)
> However, I strongly agree with adding ..... to the model. Let's implement that change.
I was kind of shocked by the display of opinions. HAL vibes.
Claude is mostly opinionated and gives you feedback where it thinks it is necessary.
My experience is that it very often reacts to a simple question by apologizing and completely flipping its answer 180 degrees. I just ask for an explanation, like "is this a good way to do x, y, z?", and it goes "I apologize, you are right to point out the flaw in my logic. Let's do it the opposite way."
Funny, but expected when some chunk of the training data is forum posts like:
"Give me the code for"
"Do it yourself, this is homework for you to learn".
Prompt engineering is learning enough about a project to sound like an expert; then you will be closer to useful answers.
Alternatively - maybe if you're trying to get it to solve a homework-like question, this type of answer is more likely.
I shudder to think that all these LLMs were trained on internet comments.
Of course, only the sub-intelligent would train so-called "intelligence" on the mostly less-than-intelligent, gut-feeling-without-logic folks' comments.
It's like that ancient cosmology with turtles all the way down, except this is dumbasses, very confident dumbasses who have lots of cash.
It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves. Not even just about coding, about any other knowledge.
> It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves.
I am from the generation whose only options on the table were RTFM and/or read the source code. Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore, but chatbots are.
I welcome chatbots. They greatly simplify research tasks. We are no longer bound to stale/poorly written docs.
I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.
> Your blend of comment was also directed at the likes of Google and StackOverflow.
No, it wasn't.
What such comments were directed at, and with good reason, were 'SO coders', aka people who, when faced with any problem, just googled a vague description of it, copy-pasted the code from the highest-scoring SO answer into their project, and called it a day.
SO is a valuable resource. AI systems are a valuable resource. I use both every day, same as I almost always have one screen dedicated to some documentation page.
The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of one's own.
But skill will be needed. It's everything that is still necessary between nothing and (good) software existing. It will just rapidly become something that we are not used to, and the rate of change will be challenging, especially for those with specialized, hard-earned skills and knowledge that become irrelevant.
Yes, they were, for reasons that have turned out to be half-right and half-wrong. At least by some people. Ctrl-c ctrl-v programming was widely derided, but people were also worried about a general inability to RTFM.
I had the good fortune to work with a man who convinced me to go read the spec of some of the programming languages I used. I'm told this was reasonably common in the days of yore, but I've only rarely worked on a team with someone else who does it.
Reading a spec or manual helps me understand the language and aids with navigating the text later when I use it as documentation. That said, if all the other programmers can do their jobs anyway, is it really so awful for them to learn from StackOverflow and Google? Probably not.
I imagine the same is true of LLMs.
I also try to read the specification or manual for tools I use, and find the biggest benefit is simply knowing what that tool is capable of, and what the recommended approach to a given problem is when using that tool. Even just skimming the modules available in a language's standard library can get you a long way.
I was once able to significantly reduce the effort for some feature just by reading the elasticsearch docs and finding that there was a feature (aliases) that did more or less exactly what we needed (it still needed some custom code, but much less than initially thought). Nobody else had bothered to look at the docs in detail.
I agree. Claude is very useful to me because I know what I am doing and I know what I want it to do. Additionally, I keep telling my friend who is studying data science to use LLMs to his advantage. He could learn a lot and be productive.
> SO is a valuable resource.
Chatbots like Copilot, Cursor, Mistral, etc serve the same purpose that StackOverflow does. They do a far better job at it, too.
> The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of one's own.
Nonsense. The same blend of criticism was at one point directed at IDEs and autocompletion. The common thread is ladder-pullers complaining how the new generation doesn't use the ladders they've used.
I repeat: we old timers need to do better.
at risk of sounding like a grandpa, this is nothing like SO. AI is just a tool for sure, one that "can" behave like a super-enhanced SO and Google, but for the first time ever it can actually write for you, and not just piddly lines but entire codebases.
i think that represents a huge paradigm shift that we need to contend with. it isn't just "better" research. and i say this as someone who welcomes all of this that has come.
IMO the skill gap just widens exponentially now. you will either have the competent developers who use these tools accelerate their learning and/or output some X factor, and on the other hand you will have literally garbage being created or people who just figure out they can now expend 1/10 the effort and time to do something and just coast, never bother to even understand what they wrote.
just encountered that with some interviews where people can now scaffold something up in record time but can't be bothered to refine it because they don't know how. (ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying re-prompt and micromanage the LLM to get the right output when it should only take another minute for someone experienced.)
> this is nothing like SO
Strong agree. There have been people who blindly copied answers from Stack Overflow without understanding the code, but most of us took the time to read the explanations that accompanied the answers.
While you can ask the AI to give you additional explanations, these explanations might be hallucinations and no one will tell you. On SO other people can point out that an answer or a comment is wrong.
> ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying re-prompt and micromanage the LLM to get the right output when it should only take another minute for someone experienced.
This is it. You will have real developers, as you do today, and developers who are only capable of creating what the latest AI model is capable of creating. They’re just a meat interface for the AI.
> this is nothing like SO
Agree. AI answers actually work and are less out of date.
> Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore
And it kind of was a problem. There was an underclass of people who simply could not get anything done if it wasn't already done for them in a Stackoverflow answer or a blog post or (more recently and bafflingly) a Youtube video. These people never learned to read a manual and spent their days flailing around wondering why programming seemed so hard for them.
Now with AI there will be more of these people, and they will make it farther in their careers before their lack of ability will be noticeable by hiring managers.
I would even say there's a type of comment along the lines of "we've been adding Roentgens for decades, surely adding some more will be fine so stop complaining".
As a second-order effect, I think there's a decline in expected docs quality (of course depends on the area). Libraries and such don't expect people to read through them, so they are spotty and haphazard, with only some random things mentioned. No wider overviews and explanations, and somewhat rightly so, why try to write it if (nearly) no one will read it. So only tutorials and Q&A sites remain besides of API dumps.
... for the brief period of time before machines take care of the whole production cycle.
Which is a great opportunity btw to drive forward a transition to a post-monetary, non-commercial post-scarcity open-source open-access commons economy.
The issues I see are that private chats are information blackholes whereas public stuff like SO can show up in a search and help more than just the original asker (can also be easily referenced / shared). Also the fact that these chatbots are wrong / make stuff up a lot.
Are you saying that black holes don't share any information? xD
I had never thought about that - I guess the privacy cuts both ways.
That's not really a problem with LLMs, which are trained on material that is, or was, otherwise accessible on the Internet, and they themselves remain accessible.
No, the problem is created and perpetuated by people defaulting to building communities around Discord servers or closed WhatsApp groups. Those are true information black holes.
And yes, privacy and commons are in opposition. There is a balance to be had. IMHO, in the past few years, we've overcorrected way too much in the privacy direction.
Unfortunately though it isn’t actually privacy in most cases. It’s just a walled garden where the visibility is limited to users and some big tech goons who eventually will sell it all.
LLMs are trained on public data though.
Less and less now actually. Synthetic data is now being used on most new models as the public data runs out.
Only if for as long as public data continues to exist.
SO was never a problem. Low effort questions were. "This doesn't work, why?" followed by 300 lines of code.
I think it's unwise to naively copy code from either stack overflow or an AI, but if I had to choose, I'd pick the one that had been peer-reviewed by other humans every time.
Ooh, the quality of "review" on SO also varies a whole lot..
Sure, I was more thinking of the original intent, and the early-days quality, than the current state of it. But I'd still take some review over none!
We old timers read the source code, which is a good proxy for what runs on the computer. From that, we construct a mental model of how it works. It is not "walking uphill 10 miles both ways". It is "understanding what the code actually does vs what it is supposed to do".
So far, AI cannot do that. But it can pretend to, very convincingly.
SO and Google are still a problem, especially since they are among the sources LLMs are trained on.
So now we can get wrong code but written in a more confident language.
If you are lucky it’s a hallucination and the error is obvious.
> So now we can get wrong code but written in a more confident language.
The lesson to learn is rather that also in "real life", you shouldn't trust confident people.
> We are no longer bound to stale/poorly written docs.
from what i gather, the training data often contains those same poorly written docs and often a lot of poorly written public code examples, so… YMMV with this statement as it is often fruit from the same tree.
(to me LLMs are just a different interface to the same data, with some additional issues thrown in, which is why i don’t care for them).
> I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.
it’s a question of trust for me. with great power (new tools) comes great responsibility — and juniors ain’t always learned enough about being responsible yet.
i had a guy i was doing arma game dev with recently. he would use chatgpt and i would always warn him about not blindly trusting the output. he knew it, but i would remind him anyway. several of his early PRs had obvious issues that were just chatgpt not understanding the code at all. i’d point them out at review, he’d fix them and beat himself up for it (and i’d explain to him it’s fine don’t beat yourself up, remember next time blah blah).
he was honest about it. he and i were both aware he was very new to coding. he wanted to learn. he wanted to be a “coder”. he learned to mostly use chatgpt as an expensive interface for the arma3 docs site. that kind of person using the tools i have no problem with. he was honest and upfront about it, but also wanted to learn the craft.
conversely, i had a guy in a coffee shop recently claim to want to learn how to be a dev. but after an hour of talking with him it became increasingly clear he wanted me to write everything for him.
that kind of short sighted/short term gain dishonesty seems to be the new-age copy/pasting answers from SO. i do not trust coffee shop guy. i would not trust any PR from him until he demonstrates that he can be trusted (if we were working together, which we won’t be).
so, i get your point about doom and gloom naysaying. but there’s a reason for the naysaying from my perspective. and it comes down to whether i can trust individuals to be honest about their work and how they got there and be willing to learn, or whether they just want to skip to the end.
essentially, it’s the same copy/pasting directly from SO problem that came before (and we’re all guilty of).
Oh, heck. We didn’t need AI to do that. That’s been happening forever.
It’s not just bad optics; it’s destructive. It discourages folks from learning.
AI is just another tool. There’s folks that sneer at you if you use an IDE, a GUI, a WYSIWYG editor, or a symbolic debugger.
They aren’t always boomers, either. As a high school dropout, with a GED, I’ve been looking up noses, my entire life. Often, from folks much younger than me.
It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it, is a big part of growing up, so a lot of older folks are actually a lot less likely to pull that crap than you might think.
> Apparently SO is not a problem anymore, but chatbots are.
I think the same tendency of some programmers to just script kiddie their way out of problems using SO answers without understanding the issue will be exacerbated by the proliferation of AI which is much more convincing about wrong answers.
It's not a binary where you either hate or welcome chatbots with nothing in between. We all use them, but we all also worry about the negatives, same as with SO.
And they were right. People who just blindly copy and paste code from SO are absolutely useless when it comes to handling real world problems, bugs etc.
Apparently from what I've read, universities are already starting to see this. More and more students are incapable of acquiring knowledge from books. Once they reach a point where the information they need cannot be found in ChatGPT or YouTube videos they're stuck.
I wonder if Google Gemini is trained on all the millions of books that Google scanned but was never able to use for their original purpose?
https://en.m.wikipedia.org/wiki/Google_Books
If, as other AI companies argue, copyright doesn't apply when training, it should give Google a huge advantage to be able to use all the world's books they scanned.
Interesting, if that's literally true, since you have to _search_ YouTube - unless maybe people ask ChatGPT what search terms to use.
It's about putting together individual pieces of information to come up with an idea about something. You could get 5 books from the library and spend an afternoon skimming them, putting sticky notes on things that look interesting/relevant, or you could hope that some guy on Youtube has already done that for you and has a 7 minute video summarizing whatever it is you were supposed to be looking up.
I have been unable to acquire knowledge from books since 35 years ago. I had to get by with self-directed learning. The result is patchy understanding, but a lot of faith in myself.
If AI can't find it, how do we?
The actual intelligence in artificial intelligence is zero. Even idiots can do better than AI, if they want. Lazy idiots, on the other hand...
We need to optimise for lazy though — lazy programmers are the best! Idiots, though, less so.
We don't know what intelligence is, so we don't know that.
"if they want" and "lazy" are the load-bearing phrases in your response. If we take them at face value, then it follows that:
1) There are no idiots who want to do better than AI, and/or
2) All idiots are lazy idiots.
The reason we're even discussing LLMs so much in the first place, is AI can and does things better than "idiots"; hell, it does things better than most people, period. Not everything in every context, but a lot of things in a lot of contexts.
Like run-of-the-mill short-form writing, for example. Letters, notices, copywriting, etc. And yes, it even codes better than general population.
By using your brain and a web search engine / searchable book index in your library / time or even asking a question somewhere public?
If it's been indexed by a Web search engine surely it's in a training dataset. The most popular Web search engines are the least efficient way of finding answers these days.
But just because it's in the training set doesn't mean the model retains the knowledge. It acquires information that is frequently mentioned, plus random tidbits of everything else. The rest can be compensated for with the 20-odd web searches models now get. That's great when you want a React dropdown, but for that detail that's mentioned in one Russian-speaking forum and can otherwise only be deduced by analysing the leaked Windows XP source code, AI will continue to struggle for a bit.
Of course, AI is incredibly useful both for reading foreign-language forums and for analysing complex code bases for original research. AI is great for supercharging traditional research.
How does this make sense when you can put any book inside of an LLM?
Learning isn't just about rote memorisation of information but the continuous process of building your inquisitive skills.
An LLM isn't a mind reader so if you never learn how to seek the answers you're looking for, never curious enough to dig deeper, how would you ever break through the first wall you hit?
In that way the LLM is no different than searching Google back when it was good, or even going to a library.
Just because you can put information into an LLM, doesn't mean you can get it out again.
I recently interviewed a candidate with a degree in computer science. He was unable to explain to me how he would have implemented the Fibonacci sequence without chatGPT.
We never got to the question of recursive or iterative methods.
The most worrying thing is that LLMs were not very useful three years ago when he started university. So the situation is not going to improve.
This is just the world of interviewing though, it was the same a decade ago.
The reason we ask people to do fizzbuzz is often just to weed out the shocking number of people who cannot code at all.
Yep, I still cannot understand how programmers who are unable to do fizzbuzz have software engineering careers. I have never worked with one like that, but I have seen so many of them in interviews.
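For anyone who hasn't seen it, fizzbuzz really is as small as its reputation suggests. A minimal Python sketch of the classic 3/5 variant (the exact wording varies by interviewer):

    # Classic FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
    # "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)

The point of the question is not the code itself; it's that anyone who can code at all should produce something like it within a few minutes.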
I have seen IT consulting and service companies employ ‘developers’ who are barely capable of using software to generate views in a banking software package and unable to write a line of Java (which was the language used on the project).
When the client knows absolutely nothing about it and is not supported by someone competent, they end up employing just anyone.
This applies to construction, IT, and probably everything else.
Unless you're also a programmer, it's very difficult to judge someone else's programming ability. I've worked places where I was the only remotely-technical person there, and it would have been easy to fake it with enough confidence and few enough morals.
But it is. Some knowledge and questions are simply increasingly outdated. If the (correct) answer on how to implement Fibonacci is one LLM query away, then why bother knowing? Why should a modern day web developer be able to write assembly code, when that is simply abstracted away?
I think it will be a hot minute before nothing has to be known and all human knowledge is irrelevant, but, especially in CS, there is going to be a tremendous amount of rethinking of what is actually important to know.
Not everyone does web dev. There are many jobs where it is necessary to have a vague idea of memory architecture.
LLM are very poor in areas such as real time and industrial automation, as there is very little data available for training.
Even if the LLM were good, we will always need someone to carry out tests, formal validation, etc.
Nobody wants to get on a plane or in a car whose critical firmware has been written by an LLM and proofread by someone incapable of writing code (don't give Boeing ideas).
The question about Fibonacci is just a way of gently bringing up other topics.
My answer and example does not really care about the specifics.
I see nothing mentioned here that a human inherently needs to concern themselves with, because none of these things are things that humans inherently care about. CS as a discipline is a layer between what humans want and how to make computers do those things. If today's devs are so far detached from dealing with 1s and 0s (which is not at all how it obviously had to develop), why would any of the other parts you mention be forever necessary, given enough artificial intelligence?
Sure, it's fun (as a discipline) for some of us, but humans inherently do not care about computer memory or testing. A good enough AI will abstract it away, to the degree that it is possible. And, I believe, it will also do a better job than any human ever did, because we are actually really, really bad at these things.
The answer on how to implement Fibonacci is so simple that it is used for coding interviews.
Any difficult problem will take the focus out of coding and into the problem itself.
See also fizzbuzz, which is even simpler, and people still fail those interview questions.
Not outdated. If you know the answer on how to implement Fibonacci, you are doing it wrong. Inferring the answer from being told (or remembering) what is a Fibonacci number should be faster than asking an LLM or remembering it.
> Inferring the answer from being told (or remembering) what is a Fibonacci number should be faster than asking an LLM or remembering it.
Really?
If I were to rank the knowledge relevant to this task in terms of importance, or relevance to programming in general, I'd rank "remembering what a Fibonacci number is" at the very bottom.
Sure, it's probably important in some areas of math I'm not that familiar with. But between the fields of math and hard sciences I am familiar with, and programming as a profession, by far the biggest (if not the only) importance of Fibonacci sequence is in its recursive definition, particularly as the default introductory example of recursive computation. That's all - unless you believe in the mystic magic of the Golden Ratio nonsense, but that's another discussion entirely.
Myself, I remember what the definition is, because I involuntarily memorize trivia like this, and obviously because of Fibonacci's salad joke. But I wouldn't begrudge anyone in tech for not having that definition on speed-dial for immediate recall.
I assume in an interview you can just ask for the definition. I don't think the interview is (should be) testing for your knowledge of the Fibonacci numbers, but rather your ability to recognize precisely the recursive definition and exploit it.
Already knowing the Fibonacci sequence isn't relevant:
> He was unable to explain to me how he would have implemented the Fibonacci sequence without chatGPT.
An appropriate answer could have been "First, I look up what the Fibonacci sequence is on Wikipedia..." The interviewee failed to come up with anything other than the chatbot, e.g. failed to even ask the interviewer for the definition of the sequence, or to come up with an explanation for how they could look it up themselves.
the point is that it's an easy problem that basically demonstrates you know how to write a loop (or recursion), not the sequence itself
Why should my data science team know what 1+1 is, when they can use a calculator? It's unfair to disqualify a data scientist just for not knowing what 1+1 is, right?
I’m 10 years into software and never bothered to remember how to implement it the efficient way, and I know many programmers who don’t know even the inefficient way but kick ass.
I once got that question in an interview for a small startup and told the interviewer: with all due respect what does that have to do with the job I’m going to do and we moved on to the next question (still passed).
You don’t need to memorize how to compute a Fibonacci number. If you are a barely competent programmer, you should be capable of figuring it out once someone tells you the definition.
If someone tells you not to do it recursively, you should be able to figure that out too.
Interview nerves might get in your way, but it’s not a trick question you need to memorize.
But I'm sure there would be some people that given the following question would not be able to produce any code by themselves:
"Let's implement a function to return us the Nth fibonnaci number.To get a fib (fibonacci) number you add the two previous numbers, so fib(N)=fib(N-1)+fib(N+2). The starting points are fib(0)=1 and fib(1)=1. Let's assume the N is never too big (no bigger than 20)."
And that's a problem if they can't solve it.
OTOH, about 15 years ago I heard from a friend who interviewed candidates that some people couldn't even count all the instances of 'a' in a string. So in fact not much has changed, except that it's harder to spot these kinds of people.
i’m around 10 years as well and i can’t even remember how the fibonacci sequence progresses off hand. I’d have to wikipedia it to even get started.
There's nothing wrong with that. But once the interviewer tells you that the next number is the sum of the previous two, starting with 0 and 1, any programmer with a pulse should be good to go.
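To make concrete how little is being asked once the definition is given: a minimal Python sketch, using the 0, 1 starting convention from the comment above (other conventions start at 1, 1) and iterating to avoid the exponential naive recursion, might look like this:

    def fib(n):
        # fib(0) = 0, fib(1) = 1; each later term is the sum of the previous two.
        # Iterative version: O(n) time, O(1) space.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib(10))  # 55

That's the whole exercise; the recursive version is a direct transcription of the definition.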
Well, I found these candidates long before ChatGPT
I interviewed a guy with a CCIE who couldnt detail what a port forward was. 3 years ago.
If some interviewer asked me what recursion was or how to implement it, I'd answer, and then ask them if they can think of a good use case for a duff's device.
Duff's device hasn't been relevant for over 25 years, and there is no reason why anybody who learnt to program within the past 20 years should even know what it is, while recursion is still often the right answer.
That's a great question because there are 3 levels of answer: 1) I don't know what recursion is 2) This is what recursion is 3) This is what iteration is
Why? It looks like a reasonable interview question.
I suppose it depends on the position, but if your company is building CRUD apps with Postgres and you're asking candidates about Fibonacci, you're wasting both your time and, more importantly, the candidate's time.
Instead, you're better off focusing on relevant topics like SQL, testing, refactoring, and soft skills.
These "clever" questions are pointless. They either serve to stroke the interviewer’s ego: "Look, I know this CS 101 problem because I just looked it up!", or to create a false image of brilliance: "Everyone here can invert binary trees!"
Becoming the single point of failure for every societal function is the goal of VC. It's working brilliantly thus far.
You could say the same about Google, where there's an entire generation who no longer needed to trawl through encyclopedias to find relevant information. I think we're doing just fine.
Actually, I did. I saw some kids just googling an exercise question instead of googling the topic: trying to find the answer to the question while avoiding understanding the underlying topic.
They’ll just mumble something about the context window needing to be larger.
Should we call them the “AI generation generation”?
I vote for GenAI
Is GenAI after gen alpha? I think it depends on whether agents become a thing. Assuming agents become a thing before the end of this decade, we could see a divide between people born before we had ai agents and after.
I think we already saw this manifestation a few decades ago, with kids who can't program without an IDE.
IDE's are fantastic tools - don't get me wrong - but if you can't navigate a file system, work to understand the harness involved in your build system, or discern the nature of your artefacts and how they are loaded and interact in your target system, you're not doing yourself any favours by having the IDE do all that work for you.
And now what we see is people who not only can't program without an IDE, but can't be productive without a plugin in that IDE, doing all the grep'ing and grok'ing for them.
There has been a concerted effort to make computers stupider for stupid users - this has had a chilling effect in the development world, as well. Folks, if you can't navigate a filesystem with confidence and discern the contents therein, you shouldn't be touching the IDE until you can.
I have (older, but that generation) colleagues who simply stop working if there’s something wrong with the build process because they don’t understand it and can’t be bothered to learn. To be fair to them, the systems in question are massively over complicated and the project definitions themselves are the result of copy-paste engineering.
Unfortunately they also project their ignorance, so there’s massive pushback from more senior employees when anyone who does understand tries to untangle the mess and make it more robust.
The same thing will happen with these ML tools in the future, mark my words: writing code will come to be seen as “too complex and error prone” and barely working, massively inefficient and fragile generated code bases will be revered and protected with “don’t fix what isn’t broken”
I worked early in my career with a developer who printed out every part of the codebase to review and learn it. He viewed text search tools, file delineations, and folder structures as crutches best avoided.
Yes indeed, that is a perfectly reasonable approach to take, especially in the world of computing where rigorous attention to detail is often rewarded with extraordinary results.
I have very early (and somewhat fond) memories of reviewing every single index card, every single hole in the punch-tape, every single mnemonic, to ensure there were no undiscovered side-effects.
However, there is a point where the directory structure is your friend, you have to trust your colleagues ability to understand the intent behind the structure, and you can leverage the structure to gain great results.
Always remember: software is a social construct, and is of no value until it is in the hands of someone else - who may, or may not, respect the value of understanding it...
I already consider reading to be a superpower. So few people seem capable these days. And we're talking about people born decades before ChatGPT.
It is truly astonishing how bad things have become, so quickly. Just the other day, I found myself debating someone about a completely fabricated AWS API call.
His justification was that some AI had endorsed it as the correct method. There are already salaried professionals out there backing such flawed logic.
Conversely, it's easier than ever to read said source material.
Kinda like the kids who only learned to drive with Waze.
When they are stuck without it, they get hopelessly lost. They feel strangled, distracted, and find it hard to focus on the road. For about 3 days. Then they pretty quickly get up to speed, and kinda remember where they used to make the left turns when they had GPS, and everything is dandy.
But it happens so infrequently, that it's really not worth the time thinking about it.
And the truth is, I am just guessing what happens after 3 days - since anyone who grew up with GPS will never be that long without it.
Sounds like 'Daddyy! Do this for mee!' level.
Otherwise I recall an old saying: 'Robots will take the jobs of engineers as soon as they are able to figure out and make what the client needs, not what the client asks for. I think we are safe.' Along those lines, I hope AI becomes a good tool, a subordinate, or at most a colleague. But not a father figure for big infants.
Please elaborate on what you think the issues will be? Why read documentation when you can simply ask questions and get better contextualised answers faster? Whatever edge issues manifest they will be eclipsed by greater productivity.
Because learning improves your internal LLM which allows you to creatively solve tasks without using external ones. Additionally it is possible to fine tune your internal LLM for the tasks useful for your specific interests, the fun stuff. And external llms are too generalised
Because back in my day, we convinced ourselves that the effort was the goal, not the result.
It was also fun, though. Unfortunately, taking up programming as a career is a quick way to become disillusioned about what matters (or at least what you're being paid for).
The issue is, glib understanding replaces actual application.
Its one thing to have the AI work through the materials and explain it.
Its another thing to have lost a lot of the background information required to sustain that explanation.
Greater productivity, is of course, a subjective measure. If that's all that matters, then being glib is perfectly acceptable. But, the value of that productivity may change in a different context - for example, I find it very difficult to go back to AI-generated code some months later, and understand it - unless I already took the time earlier to discern the details.
People said that about the internet as well. These kids don't even find the right books in a library!
disclaimer: not a programmer for a living.
I specifically asked the AI I interact with not to generate code or give code examples, but to highlight topics I need to understand better in order to answer my own questions. I think it enhances my personal competences better that way, which I value above 'productivity'. As I learn more, I do become more efficient and productive.
Some of the recommendations it comes with are hard programming skills, others are project management oriented.
I think this is a better approach personally to use this kind of technology as it guides me to better my hard and soft skills. long term gains over short term gains.
Then again, i am under no pressure or obligation to be productive in my programming. I can happily spend years to come up with a good solution to a problem, rather than having a deadline which forces to cut as many corners as possible.
I do think this is how it should be in professional settings, but I respect that a company doesn't always have the resources (time, mostly) to allow for it. It's sad but true.
Perhaps someday, AIs will be far enough to solve problems properly, and think of the aspects of a problem the person sending the question has not. AIs can generate quite nice code, but only as good as the question asked.
If the requester doesn't spend time to learn enough, they can never get an AI to generate good code. It will give what you ask for, warts and all!
I did spend some time trying to get AI to generate code for me. To me, it only highlighted the deficiencies in my own knowledge and ability to properly formulate the solution I needed. If I take the time to learn what is needed to formulate the solution fully, I can write the code to implement it myself, so the AI just becomes an augment to my typing speed, nothing else. This last part is why I believe it's better to have it guide my growth and learning, rather than produce something in the form of an actual solution (in code or algorithmically).
Some time circa late 1950s, a coder is given a problem and a compiler to solve it. The coder writes their solution in a high level language and asks the compiler to generate the assembly code from it. The compiler: I cannot generate the assembly code for you, that would be completing your work ... /sarcasm
On a more serious note: LLMs today are an early technology, much like the early compilers that many programmers didn't trust to generate optimized assembly on par with hand-crafted assembly, so they had to check the compiler's output and tweak it if needed. It took a while until the art of compiler optimization was perfected to the point that we don't question what the compiler is doing, even if it generates sub-optimal machine code. The productivity gained from using a HLL vs. assembly was worth it. I can see LLMs progressing towards the same tradeoff in the near future. It will take time, but it will become the norm once enough trust is established in what they produce.
> The productivity gained from using a HLL vs. assembly was worth it.
You can be very productive in a good assembler (for example RollerCoaster Tycoon and RollerCoaster Tycoon 2 were written basically solo by Chris Sawyer in x86 assembler). The reason was rather that over the generations, the knowledge of writing code in assembly decayed because it got used less and less.
I wonder if this was real or if they set a custom prompt to try and force such a response.
If it is real, then I guess it's because LLMs have been trained on a bunch of places where students asked other people to do their homework.
it's real, (but a reply on the forum suggests) Cursor has a few modes for chat, and it looks like he wasn't in the "agent" chat pane, but in the interactive, inline chat thingy? The suggestion is that this mode is limited to the size of what it can look at, probably a few lines around the caret?
Thus, speculating, a limit on context or a prompt that says something like "... you will only look at a small portion of the code that the user is concerned about and not look at the whole file and address your response to this..."
Other replies in the forum are basically "go RTFM and do the tutorial"!
Sounds like something you would find on Stack Overflow
Quite reasonable of it to do so I'd say.
The AI tools are good, and they have their uses, but they are currently at best at a keen junior/intern level, making the same sort of mistakes. You need knowledge and experience to help mentor that sort of developer.
Give it another year or two and I hope the student will become the master and start mentoring me :)
My biggest worry about AI is that it will do all the basic stuff, so people will never have a chance to learn and move on to the more complex stuff. I think there's a name for this, but I can't find it right now. In the hands of a tenured expert though, AI should be a big step up.
Similar to the pre-COVID coding bootcamps and crash-courses we’ll likely just end up with an even larger cohort of junior engineers who have a hard time growing their career because of the expectations they were given. This is a shame but still, if such a person has the wherewithal to learn, the resources to do so are more abundant and accessible than they’ve ever been. Even the LLM can be used to explain instead of answer.
LinkedIn is awash with posts about being a ‘product engineer’ and ‘vibe coding’ and building a $10m startup over a weekend with Claude 3.5 and a second trimester foetus as a cofounder, and the likely end result there is simply just another collection of startups whose founding team struggles to execute beyond that initial AI prototyping stage. They’ll mistake their prompting for actual experience not realising just how many assumptions the LLM will make on their behalf.
Won’t be long before we see the AI startup equivalent of a rug-pull there.
Played a game called Beyond a Steel Sky (sequel to the older beneath a steel sky).
In the starting section there was an "engineer" going around fixing stuff. He just pointed his AI tool at the thing, and followed the instructions, while not knowing what he's doing at any point. That's what I see happening
I just did that with Cursor/Claude where I asked it to port a major project.
No code, just prompts. Right now, after a week, it has 4500 compilation errors, with every single file having issues, requiring me to go back and actually understand what it's gone and done. Debatable whether it has saved time or not.
I did the same, porting a huge python script that had grown way too unwieldy to Swift. It undoubtedly saved me time.
Your project sounds a lot bigger than mine, 4500 is a lot of compilation errors. If it’s not already split into modules, could be a good time to do it piece by piece. Port one little library at a time.
I think it goes the same way as compilers: bits -> assembly -> C -> JVM, and now you mostly don't need to care what happens at lower levels because stuff works. With AI we are now basically at the bits -> assembly phase, so you need to care a lot about what is happening at the lower level.
To be honest you don't need to know the lower level things. It's just removing the need to remember the occasional boilerplate.
If I need to parse a file I can just chuck it a couple lines, ask it to do it with a recommended library and get the output in 15 minutes total assuming I don't have a library in mind and have to find one I like.
Of course verification is still needed but I'd need to verify it even if I wrote it myself anyway, same for optimization. I'd even argue it's better since it's someone else's code so I'm more judgemental.
The issue comes when you start trying to do complex stuff with LLMs, since you then tend to half-ass the analysis part and get led down the development path the LLM chose, and you end up with a mishmash of the AI's code style and yours from the constant fixes, and it becomes a mess. You can get things implemented quickly like that, which is cool, but it feels like it inevitably becomes spaghetti code, and sometimes you can't even rewrite it easily since it used something that works but that you don't entirely understand.
Do you worry about calculators preventing people from mastering big-number multiplication? That's an interesting question, actually. When I was a kid, calculators were not as widespread and I could easily multiply 4-digit numbers in my head. Nowadays I'd be happy to multiply 2-digit numbers without mistakes. But I carry my smartphone with me, so there's just no need to do so...
So while learning the basic stuff is definitely necessary, just like it's necessary to understand how to multiply or divide numbers of any size (kids learn that nowadays, right?), actually mastering those skills may be wasted time?
> actually mastering those skills may be wasted time?
this question is, i’m pretty sure it is safe to assume, the absolute bane of every maths teacher’s existence.
if i don’t learn and master the fundamentals, i cannot learn and master more advanced concepts.
which means no fluid dynamics for me when i get to university because “what’s the point of learning algebra, i’m never gonna use it in real life” (i mean, i flunked fluid dynamics but it was because i was out drinking all the time).
i still remember how to calculate compound interest. do i need to know how to calculate compound interest today? no. did i need to learn and master it as an application of accumulation functions? absolutely.
just because i don’t need to apply something i learned to master before doesn’t mean i didn’t need to learn and master it in order to learn and master something else later.
mastery is a cumulative process in my experience. skipping out on it with early stuff makes it much harder to master later stuff.
> Do you worry about calculators preventing people from mastering big-number multiplication?
yes.
Honestly, I think we already see that in wider society, where mental arithmetic has more or less disappeared. This is in general fine ofc, but it makes it much harder to check the output of a machine if you can't do approximate calculations in your head.
> but they are currently at best at a keen junior/intern level
Would strongly disagree here. They are something else entirely.
They have the ability to provide an answer to any question, but the accuracy decreases significantly as the task gets less popular.
So if I am writing a CRUD app in Python/React it is expert level. But when I throw some Scala or Rust it is 100x worse than any junior would ever be. Because no normal person would confidently rewrite large amounts of code with nonsense that doesn't even compile.
And I don't see how LLMs get significantly better without a corresponding improvement in input data.
Still not enough.
Not only that, still very far away from being good enough. An opinionated AI trying to convince me his way of doing things is the one true way and my way is not the right way, that's the stuff of nightmares.
Give it a few years and when it is capable of coding a customized and full-featured clone of Gnome, Office, Photoshop, or Blender, then we are talking.
It's because they "nerfed" Cursor by not sending the whole files to Claude anymore, but if you use RooCode, the performance is awesome and above that of an average developer. If you have the money to pay for the queries, try it :)
It's not reasonable to say “I cannot do that as it would be completing your work”, no.
Your tool has no say in the morality of your actions. It's already problematic when they censor sexual topics, but if the tool makers feel entitled to configure their tools to allow only certain kinds of use for your work, then we're speedrunning toward dystopia (as if that weren't the case already).
Exactly. The purpose of it is to help you, that is its one and only utility. It has one job, and it refuses to do it.
>Your tool has no say in the morality of your actions,
Asimov's "three laws of robotics" beg to differ.
Only if your actions would harm another human or yourself, right?
Anyhow, in the Caliban books the police robot could consider unethical and immoral things, and even contemplate harming humans, but doing so made it work really slowly, almost tiring it.
Someone on my team complained to me about some seemingly relatively easy task yesterday. They claimed I was pushing more work onto them as I'm working on the backend and they are working on the frontend. This puzzled me so I tried it and ended up doing the work in about 1.5h
I did struggle through the poor docs of a relatively new library, but it wasn't hard.
This got me wondering: maybe they have become so dependent on AI copilots that what should have been an easy task was seen as insurmountably hard because the LLM didn't have info on this new-ish library.
Have you considered the extra time it would take for someone other than yourself to get onboarded onto the problem you are solving? It can quickly add 2-3 extra hours on top of that [seemingly easy] work.
I also didn't know the library nor had any exposure to it (yesterday was literally my first time looking at the docs for this particular library) and still got it done in 1.5h, because someone has to do it.
And they've been at the company working with the FE stack longer than me by a few months!
I'm not even on the frontend team and decided to take the matter into my own hands, because I was pretty sure my ask wasn't too onerous and I wanted to double check. The library had an out-of-the-box, first-party add-on package to do exactly what we needed to get the data into the right shape for the backend, but the dev made it seem like I was pushing more work onto FE. I'm just trying to get the right data format...
Maybe, but most people used to trust that their coworkers had the ability to quickly learn what needed to be learned to solve a small problem. AI has made people dependent and has eroded that skill a bit. I've seen people confronted with something they need to learn about come back with little more than "AI said this, so here you go".
I think this is very real, and it will act as a force pulling toward the LLMs' center of gravity, which means popular and established tech. It might even create a divide between "prompt engineer engineers" and those who understand. That assumes using LLMs will not improve your understanding compared to traditional means, which isn't necessarily true. I dislike LLMs as coders, but for learning new topics quickly they are better than obscure literature or googling for deep blog posts.
That said the pure prompt engineers might suffer similar fate as Stack Overflow engineers, for another reason: the limiting factor of building software is not shitting out code or even tests, it’s curbing the accumulated total complexity in the project as it grows. This is incredibly hard even for humans but the best engineers can reduce it. These most difficult problems have at least 2-3 properties that make them almost impossible for LLMs today: they’re non-local, hard to quantify and have enough constraints to mandate solutions that end in the fringes of training data set. Even simple self-contained LLM solutions introduce more complexity than necessary.
Nice example. If I were that person, I could say the EXACT same thing, from my perspective, about my frustration with the LLM.
That's my mental model: I am trying to get my AI peer to build something and it is complaining about not knowing this new-ish library.
They just didn't RTFM :)
The thing is, even I hadn't seen the docs before yesterday, but I could intuit through experience working with similar libraries that my ask wasn't that onerous.
The gist of it was to take the JSON representation of an editor and convert it to Markdown. Every popular editor library has an add on or option to export Markdown as well as import Markdown. But on import/export, you then need to almost always write a small transformer for any custom visual elements encoded as text.
Why MD? Because the user is writing a prompt, so we need it as MD, and it makes sense to transact this data as MD. It just so happens that the library is new-ish and the docs are sparse in some areas. But it's totally manageable just by looking at the source to see how to do it.
I can't say for certain where the disconnect is in this whole thing, but to me it felt like "this isn't easy (because the LLM can't do it), so we shouldn't do it this way".
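For the curious, the shape of that task is roughly: walk the editor's JSON tree and emit Markdown, with a small special case for each custom element. A minimal sketch, assuming a hypothetical node format with "type", "text", "level" and "content" fields (any real editor library's schema will differ):

    def to_markdown(node):
        # Recursively convert a hypothetical editor JSON node to Markdown.
        # Assumed node shape: {"type": ..., "text": ..., "content": [...]}
        children = "".join(to_markdown(c) for c in node.get("content", []))
        kind = node.get("type")
        if kind == "text":
            return node.get("text", "")
        if kind == "paragraph":
            return children + "\n\n"
        if kind == "heading":
            return "#" * node.get("level", 1) + " " + children + "\n\n"
        if kind == "mention":
            # Example of a custom visual element encoded as plain text.
            return "@" + node.get("label", "")
        return children  # unknown node types: just pass the children through

    # markdown = to_markdown(editor_state_as_json)

Most of the work is deciding how each custom node should round-trip, not the traversal itself.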
Clearly trained on Stack Overflow answers.
I hope they can do some reinforcement learning from Stack Overflow moderator decisions also
I guess that's straight out of the training data.
Quite common on Reddit to get responses that basically go "Is this a homework assignment? Do your own work".
This is quite a lot of code to handle in one file, so the recommendation is actually good. In the past month (feels like a year of planning) I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code - Claude was removing some of the code, I didn't have coverage on some of it, and the end result was missing functionality.
Good thing that we can use .cursorrules, so this is something that will partially improve my experience - until a random company releases the best AI coding model that runs on a Raspberry Pi with 4GB of RAM (yes, this is a spoiler from the future).
> I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code
Is it a mistake, though? Some of the best codebases I worked on were a few files with up to a few thousand LoC each. Some of the worst were the opposite: thousands of files with less than a few hundred LoC in each of them. With the tools that I use, I often find navigating and reading through a big file much simpler than having to keep 20 files open to get the full picture of what I am working on.
At the end of the day, it is a personal choice. But if we have to choose something we find inconvenient just to be able to fit in the context window of an LLM, then I think we are doing things backward.
Claude seems to be somewhat OK with 1500 LOC in one file. It may miss something, mess something up, sure, that is why you should chunk it up.
I'm using Cursor & Claude/R1 on a file with 5000 loc, seems to cope OK
So the AI trained on Stack Overflow and Reddit and learned to say “Do your own homework”. I don’t see a problem.
This is probably coming from the safety instructions of the model. It tends to treat the user like a child and doesn't miss any chance to moralize. And the company seems to believe that it's a feature, not a bug.
Hah, that's typical Sonnet v2 for you. It's trained for shorter outputs, and it's causing it to be extremely lazy. It's a well known issue, and coding assistants contain mitigations for this. It's very reluctant to produce longer outputs, usually stopping mid-reply with something like "[Insert another 2k tokens of what you've been asking for, I've done enough]". Sonnet 3.7 seems to fix this.
Is it being lazy though? Or is it accidentally parroting an actual good mentor?
Not really, these assistants are all trained as yes-men, and the training usually works well.
It might be a conflict between shorter outputs and the "soft reasoning" feature that version of Sonnet has, where it stops mid-reply and reflects on what it has written, in an attempt to reduce hallucinations. I don't know what exactly triggers it, but if it triggers in the middle of the long reply, it notices that it's already too long (which is an error according to its training) and stops immediately.
(Or maybe I'm entirely wrong here!)
Ah the cycle of telling people to learn to code... First tech journalists telling the public, then programmers telling tech journalists, now AI telling programmers... What comes next?
How do you deal with technical debt in AI-generated code?
Hope that LLMs get better faster than your tech debt grows, probably.
When I see juniors using LLMs, you cannot have technical debt because everything is recreated from scratch all the time. It's a disaster and no one learns anything, but people seem to love the hype.
ask AI to fix it
Get another job
They say the AI coding assistants are like a junior developer... sounds about right.
Predicted way back in 1971 in the classic movie “Willy Wonka and the Chocolate Factory”!
One of the many hysterical scenes I didn’t truly appreciate as a kid.
https://youtu.be/tMZ2j9yK_NY?si=5tFQum75pepFUS8-
That's Cursor Pro. What's the monthly subscription price for being patronized like that?
I recently saw this video about how to use AI to enhance your learning instead of letting it do the work for you.[0]
"Get AI to force you to think, ask lots of questions, and test you."
It was based on this advice from Oxford University.[1]
I've been wondering how the same ideas could be tailored to programming specifically, which is more "active" than the conceptual learning these prompts focus on.
Some of the suggested prompts:
> Act as a Socratic tutor and help me understand X. Ask me questions to guide my understanding.
> Give me a multi-level explanation of X. First at the level of a child, then a high school student, and then an academic explanation.
> Can you explain X using everyday analogies and provide some real life examples?
> Create a set of practice questions about X, ranging from basic to advanced.
Ask AI to summarize a text in bullet points, but only after you've summarized it yourself. Otherwise you fail to develop that skill (or you start to lose it).
---
Notice that most of these increase the amount of work the student has to do! And they increase the energy level from passive (reading) to active (coming up with answers to questions).
I've been wondering how the same principles could be integrated into an AI-assisted programming workflow. i.e. advice similar to the above, but specifically tailored for programming, which isn't just about conceptual understanding but also an "activity".
Maybe before having AI generate the code for you, the AI could ask you for what you think it should be, and give you feedback on that?
That sounds good, but I think in practice the current setup (magical code autocomplete, and now complete auto-programmers) is way too convenient/frictionless, so I'm not sure how a "human-in-the-loop" approach could compete for the average person, who isn't unusually serious about developing or maintaining their own cognitive abilities.
Any ideas?
---
[0] Oxford Researchers Discovered How to Use AI To Learn Like A Genius
https://www.youtube.com/watch?v=TPLPpz6dD3A
[1] Use of generative AI tools to support learning - Oxford University
https://www.ox.ac.uk/students/academic/guidance/skills/ai-st...
I think with programming, the same basic tension exists as with the smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
So the problem seems to boil down to, how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.
So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?
In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.
And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.
Edit: Reposted this at top-level, since I think it's more important than the "implementation details" I was responding to.
It'll happen when these AI companies start trying to make a profit. Models will become unavailable and those remaining will jack up the prices for the most basic features.
From there these proompters will have 2 choices: do I learn how to actually code, or do I pay more for the AI to code for me?
Also it stops working well after the project grows too large, from there they'd need an actual understanding to be able to direct the AI - but going back and forth with an AI when the costs are insane isn't going to be feasible.
That’s great advice! Virtual office hours!
The brain is not a muscle, but it behaves like one: abilities you no longer use, it drops.
Like speaking another language you once knew but haven’t used in years, or forgetting theorems in maths that once were familiar, or trying to focus/meditate on a single thing for a long time after spending years on infinite short content.
If we don’t think, then it’s harder when we have to.
Right now, LLMs tend to be like Google before all the ads and SEO cheating that made things hard to find on the web. Ads have been traded for assertive hallucinations.
These kinds of answers are really common; I guess you have to put a lot of work in to remove all of them from the training data. For example, "no, I'm not going to do your homework assignment".
Perhaps Cursor has learned the concept of shakedowns. It will display stolen code only if you upgrade the subscription or sign a minerals deal.
Hmm, this gave me an interesting project idea: a coding assistant that talks shit about your lack of skills and low code quality.
This has nothing to do with Claude. Otherwise all other Claude interfaces would be putting out this response.
That's actually pretty good advice. He doesn't understand his own system enough to guide the AI.
Moralist bias:
Then compilers are a crutch and we should all be programming in assembly, no matter the project size.
IMO, to be a good programmer, you need to have a basic understanding of what a compiler does, what a build system does, what a normal processor does, what a SIMD unit does, what your runtime's concurrency model does, what your runtime's garbage collector does and when, and more (what your deployment orchestrator does, how your development environment differs from production...).
You don't need to have any understanding on how it works in any detail or how to build such a system yourself, but you need to know the basics.
Sure - we just need to learn to use the tools properly: Test Driven Development and/or well structured Software Engineering practices are proving to be a good fit.
Looks like an April Fools' joke, but it's real :D
Can we be sure that this screenshot is authentic?
No. We cannot be sure that an audio or a video is authentic, let alone a screenshot.
Working with Cursor will make you more productive when/if you know how to code, how to navigate complex code, how to review and understand code, how to document it, all without LLMs. In that case you feel like having half a dozen junior devs or even a senior dev under your command. It will make you fly. I have tackled dozens of projects with it that I wouldn't have had the time and resources for. And it created absolutely solid solutions. Love this new world of possibilities.
Sounds like Claude 3.5 Sonnet is ready to replace senior software engineers already!
That's the correct answer
Damn, AI is getting too smart
I agree with Cursor
It's StackOverflow training data guiding the model... XD
So AI is becoming sentient ?
800 lines is too long to parse? Wut?
"I'm sorry Dave, I'm afraid I can't do that"
This message coming from within the IDE is fine, I guess. Any examples from writing software or Excel?
I had an extremely bad experience with Cursor/Claude.
I have a big Angular project, +/- 150 TS files. I upgraded it to Angular 19 and can now optimize the build by marking all components, pipes, services, etc. as "standalone", essentially eliminating the need for modules and simplifying the code.
I thought it was perfect for AI, as it is straightforward refactoring work that would be annoying for a human:
1. Search every service and remove the "standalone: false"
2. Find module where it is declared, remove that module
3. Find all files where module was imported, import the service itself
Cursor and Claude constantly lost focus, refactoring services without taking care of the modules/imports at all and generally making things much worse, no matter how much "prompt engineering" I tried. I gave up and made a Jira task for a junior developer instead.
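For what it's worth, the purely mechanical part of a refactor like that (step 1, stripping the flag) is the kind of thing a dumb script does more predictably than an LLM. A rough Python sketch, assuming the flag appears as a literal `standalone: false` property in the decorators (a crude text-level pass; a real codemod would work on the AST):

    import pathlib
    import re

    # Remove "standalone: false" properties (and a trailing comma) from all
    # TypeScript files under src/. Review the resulting diff before committing.
    pattern = re.compile(r"\s*\bstandalone:\s*false\s*,?")

    for path in pathlib.Path("src").rglob("*.ts"):
        text = path.read_text(encoding="utf-8")
        new_text = pattern.sub("", text)
        if new_text != text:
            path.write_text(new_text, encoding="utf-8")
            print(f"updated {path}")

Steps 2 and 3 (removing the modules and fixing imports) are where the judgment lives, which is exactly the part the LLM kept botching.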
> I gave up and made a Jira task for a junior developer instead.
The true senior engineer skill.
Yes, it feels like refactoring would be a perfect use case for LLMs, but it only works for very simple cases. I've tried several times to do bigger refactors spanning several files with Cursor and it always failed. I think it's a combination of the context window not being big enough and tooling/prompting that could probably be improved to better support the refactoring use case.
Some IDEs like IntelliJ IDEA have a structured search & replace feature. I think there are dedicated projects for this task too, so you can search & replace things using the AST and not just text.
Maybe it would make sense to ask the AI to use those tools instead of making the edits directly?
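To illustrate the AST idea with Python's standard library (a TypeScript project would reach for a codemod tool such as ts-morph or jscodeshift instead): a structural transform matches nodes by their shape rather than by text, so formatting and comments can't confuse it.

    import ast

    class RenameCall(ast.NodeTransformer):
        """Rewrite every call to old_name(...) into new_name(...), structurally."""

        def __init__(self, old_name, new_name):
            self.old_name = old_name
            self.new_name = new_name

        def visit_Call(self, node):
            self.generic_visit(node)  # rewrite nested calls first
            if isinstance(node.func, ast.Name) and node.func.id == self.old_name:
                node.func = ast.Name(id=self.new_name, ctx=ast.Load())
            return node

    source = "result = fetch_data(url)\nprint(fetch_data(other))"
    tree = RenameCall("fetch_data", "load_data").visit(ast.parse(source))
    print(ast.unparse(tree))  # requires Python 3.9+; both calls are renamed

Whether the AI drives such a tool or a human does, the edits stay syntactically valid, which is more than can be said for free-form LLM rewrites.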
How is a 'guess-the-next-token' engine going to refactor one's codebase?
Just because everyone's doing it (after being told by those who will profit from it that it will work) doesn't mean it's not insane. It's far more likely that they're just a rube.
At the end of using whatever tool one uses to help refactor one's codebase, you still have to actually understand what is getting moved to production, because you will be the one getting called at midnight on Saturday to fix the thing.
Could this not be done algorithmically though? One can go reasonably far with static code parsing.
I would use find ... -exec, ripgrep and sed for this. If I have a junior or intern around I'd screen share with them and then have them try it out on their own.
Text files are just a database with a somewhat less consistent query language than SQL.
Sounds like Claude "has" ADHD lol
It doesn't have ADHD; it's more likely because recent versions of Cursor create chunks that are too small. So Cursor is looking at the project with a very small magnifying glass and forgets the big picture (in addition to the context-length issue).
Vibe coding is exactly like how Trump is running the country. Very little memory of history, shamefully small token window and lack of context, believes the last thing someone told it, madly hallucinating fake facts and entire schizophrenic ideologies, no insight into or care about inner workings and implementation details, constantly spins around in circles, flip flops and waffles back and forth, always trying to mitigate the symptoms of the last kludge with suspiciously specific iffy guessey code, instead of thinking about or even bothering to address the actual cause of the problem.
Or maybe I'm confusing cause and effect, and Trump is actually using ChatGPT or Grok to run the country.
Ah, I see Claude has trained a lot on internet resources.
This made my day so far.
BOFH vibes from this. I have also had cases of lazy ChatGPT code generation, although not so obnoxious. What's next - digital spurs to nudge them in the right direction?
Attitude AI
Opinionated AI. On an opinionated laptop.
Oh, what a middle finger that seemed to be. I had a similar experience at the beginning with ChatGPT (1-2 years back?), until I started paying for a subscription. Now, even if it's a 'bad idea' when I ask it to write some code (for my personal use - not work/employment/company) and I insist upon the 'ill-advised' code structure, it does it.
I was listening to Steve Gibson on Security Now speaking about memory-safe programming languages and the push for the idea, and I was thinking two things: 1) (most) people don't write code (themselves) any more (or we are heading in that direction), thus out of 10k lines of code someone may manually miss some error/bug (although a second and third LLM doing code review may catch it); 2) we can now ask an LLM to rewrite 10k lines of code from language X to language Y and it will be cheaper than hiring 10 developers to do it.
he smart
skidMark? what is this code? sounds like a joke almost... maybe it's some kind of April fools preparation that leaked too early
“Not sure if LLMs know what they are for (lol), but doesn’t matter as a much as a fact that I can’t go through 800 locs. Anyone had similar issue? It’s really limiting at this point and I got here after just 1h of vibe coding”
We are getting into LLM-humanization territory again; this happens more often with people who don't grasp what an LLM actually is, or who are just delusional.
At the end of the day it’s a mathematical equation, a big one but still just math.
They don’t “know” shit
Based AI. This should always be the response. This as boilerplate will upend deepseek and everything else. The NN is tapping into your wetware. It's awesome. And a hard coded response could even maybe run on a CPU.
Yup. My initial response was simply, "Now it's getting somewhere."