Slow June, people voting with their feet amid this AI craze, or something else?
It’s summer. Students are on break, lots of people are on vacation, etc. Let’s wait to see if the trend persists before declaring another AI winter.
Agreed. I think being between academic years is likely a much bigger factor than we realize. I’m a college professor, and at the end of spring quarter we had a lot of conversations with undergrads, grad students, and faculty about how people are actually using AI.
Literally every undergrad student I spoke with said they use it for every written assignment (for the most part in legitimate, non-cheating educational ways). Most students used it for all or most of their programming assignments. Most use it to summarize challenging or long readings. Some absolutely use it to just do all their work for them, though fewer than you might expect.
I’d be pretty surprised if there isn’t a significant bounce-back in September.
This worries me, though. I’ve found ChatGPT to be wrong in basically every fact-based question I’ve asked it. Sometimes subtly, sometimes completely, but it always hallucinates. You cannot use it as a source of truth.
Honestly I feel like at this point its unreliability is kind of helpful for students. They have to learn how to use it most effectively as a tool for producing their own work and not a replacement. In my classes the more relevant “problem” for students is that GPT produces written work that on the surface feels composed and sensible but is actually straight up garbage. That’s good. They turn that in, it’s extremely obvious to me, and they get an F (because that’s the grade AI earned with the garbage paper).
But they can and should use it for things it’s great at: reword this long sentence I’m having trouble phrasing concisely, help me think of a title for my paper, take my pseudocode and help me turn it into a while loop in R, generate a list of current researchers on this topic and two of their most recent publications, translate this paragraph of writing from Foucault/Marx/Bourdieu/some-good-thinker-and-bad-writer into simpler wording…
I have a calculator in my pocket even though my teachers assured me I wouldn’t. Students will have access to and use AI forever now. The worry should be that we fail to teach them the difference between a homework-bot and an incredible, versatile tool to leverage.
I have been using it to do deep dives into subjects, especially text analysis. Do you want to know the entire vocabulary of the Gospel of Mark in the original Greek, for example? 1,080 words. Now how does this compare to a section of Plato’s Republic of the same size? About 6-7x as large.
So right there we can see why Mark is often viewed as a direct text while Plato is viewed as a more ambiguous writer.
Mark is a direct and terse narrative of a specific segment of Jesus’s life and teachings, while the Republic is an attempt to expound a philosophy and system of government.
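The vocabulary counts above are easy to reproduce yourself. Here’s a minimal Python sketch; the snippet below is a tiny placeholder, not the full text, and the counting method (case-folded distinct word forms) is just one of several reasonable choices:

```python
import re

def vocab_size(text: str) -> int:
    """Count distinct word forms: case-fold, then split on runs of word characters.

    Python 3 regexes are Unicode-aware by default, so this handles Greek
    just as well as English.
    """
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens))

# Placeholder snippet, NOT the full Gospel text:
mark_snippet = "αρχη του ευαγγελιου ιησου χριστου υιου θεου"
print(vocab_size(mark_snippet))  # 7
```

Running the same function over two equal-length passages gives you the kind of direct vocabulary-richness comparison described above.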
I agree with you, but I’m not sure I’d call him a more ambiguous writer. Mark is a ‘just the facts, ma’am’ notation of near-contemporary verbal histories, with the other gospels being attempts to add on contemporary allegories and legends attributed to Jesus by different groups (or John, who just did his own thing).
I’d be curious about a comparison with the Apology and Crito: similar narratives of a similar figure in a specific segment of his life (the end of it). They’re fairly direct and terse, as Socrates was portrayed as being direct and terse, but otherwise the styles are similar as (throw on hard hat) Jesus appears to have been attributed many of the allegories of Socrates in the recorded gospels, which makes sense if you’re trying to appeal to followers of Hellenic religions such as those in Rome and Greece.
I think you’re being a bit self-centered; it’s always going to be summer somewhere. This is a tool used globally.
I see your point but:
- It’s not always summer somewhere; the North and South are both in spring/fall half the year.
- The global North has a much larger population than the South.
It’s summer somewhere half the time, but thank you for reminding them the southern hemisphere exists!
It’s because it’s summer and students aren’t using it to cheat on their assignments anymore.
It’s definitely this. Except the kids taking summer classes, who statistically probably have higher rates of cheating.
It’s not just that the novelty has worn off; it’s progressively gotten less useful. Any goddamn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability they have significantly dumbed down its utility in the public release. I can’t even ask ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.
Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily interesting LLM stuff is still happening, it’s just nothing to do with OpenAI.)
I pay for it and it’s… okay for most things. It’s pretty great at nerd stuff, though*. Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.
*If you know enough to suss out the obviously wrong shit it produces every once in a while.
Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.
I usually can find what I’m looking for unless it’s really obscure with days of searching. If something is that obscure, it seems kind of unlikely ChatGPT is going to give a good answer either.
If you know enough to suss out the obviously wrong shit it produces every once in a while.
That’s one pretty big problem. If something really is difficult/complex you likely won’t be able to tell the difference between a wrong answer from ChatGPT and one that’s correct unless it just says something obviously ridiculous.
Obviously humans make mistakes too, but at least when you search you see results in context, others can potentially call out or add context to things that might not be correct (or even misleading), etc. With ChatGPT you kind of have to trust it or not.
Yeah, if it’s that hard to find, GPT is just going to hallucinate some BS into the response. I use it as a Stack Overflow at times and often run into garbage when I’m trying to solve a truly novel problem. I’ll often try to simplify it to something contrived, but mostly find the output useful as a sort of spark. I can’t say I ever find the raw code it generates useful or all that good.
It’ll often give wrong answers but some of those can contain useful bits that you can arrange into a solution. It’s cool, but I still think people are oddly enamored with what is really just a talking Google. I don’t think it’s the game changer people are thinking it is.
It’s pretty useful if you’re in a more generalist job. I mostly work in visual design, but I sometimes deal with coding and web dev. As someone with a mostly surface understanding of these things, asking gpt to explain exact things that don’t make sense in basic terms or solve basic issues is a huge time saver for me. Googling these issues usually works but takes way longer than getting a tailored response from gpt if you know how to ask.
ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it’s become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.
I got it to give me a book that was still under copyright by selectively asking for bigger and bigger quotes. Took a while. Now it seems to have cottoned on to that trick.
Well yeah, it’s kinda cool, but the novelty will wear off. It’s useful sometimes but it’s not a magic elixir.
I use it for quick D&D ideas. Need an NPC on the fly? ChatGPT will help you out.
What a fantastic use case.
It’s really fucking annoying getting “As an AI language model, I don’t have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on…” at the beginning of every prompt, followed by the driest, most bland answer imaginable.
Yeah, it’s boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the “guardrails”.
I love the boring. Every other day, I think "man, I really don’t want to do this annoying task." I’m not sure if it even saves much time since I have to look over the work, but it’s a hell of a lot less mentally exhausting.
Plus, it’s fun having it Trumpify speeches. It’s tremendous. I’ve spent hours reading the bigglyest speeches. Historical speeches, speeches about AI, graduation speeches where bears attack midway through… Seriously, it never gets old
It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing has really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:
“Ok wikipedia says this ____”
It answered in this long-winded way that had all these things like “blessed prophet of Allah”. Basically the answer I would expect from an Imam.
I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up caselaw from it, I took it as confirmation. I don’t know how it happened, but for some questions it has this very authoritative tone, like it knows this without any doubt.
For my professional work, the training data is way too outdated by now for ChatGPT to be anywhere near being useful. The browsing feature also can’t make up for it, because it’s pretty bad at Internet search (bad search phrases etc).
i find even for really complex stuff it’s pretty good as long as you direct it: it can suggest some things, you can do some searching based on that, maybe give it a few links to summarise for you, etc
it doesn’t do the work for you, but it makes a pretty good assistant that doesn’t quite understand the subject matter
I’m old enough to not need a babysitter to use the Internet for research.
It even told me a few times that its training data is too outdated and that there probably was some progress in that area. I have to freaking push it to actually do a web search to update that knowledge with prompts like “You have web access, use it!”. It then finds a few posts on stackoverflow I’ve already seen and draws some incorrect conclusions from that.
I’m way faster on my own.
Try out Bing, I like it a lot more than GPT. Works in Edge only, though.
In my experience, Bing Chat is even worse, because it skips the part where ChatGPT is trying to come up with something based on the training data and goes straight to bad web searches with incorrect summaries.
Hmm weird, for me it just tells me it doesn’t have good enough info to provide what I need
I also had that a few times, but it doesn’t make it any better.
your experience does not match mine
which is not saying that your experience is wrong or that you’re using it wrong, however i and many others have managed to get exceptionally good results out of it, and you should be aware of that fact
referring to these experiences as “needing a babysitter” is needlessly provocative as well; we’re all just talking here: no need to insult the intelligence of anyone that has managed to use the tool in a way that works incredibly well
i hope that at some point in the future, you’re able to have your experience match ours, and have a similar feeling of “ooooh i see now… wait… OOOOOOH I REALLY SEEEE NOW”
ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.
It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It’s been somewhat amusing, or perhaps confounding, seeing people use it thinking it’s an oracle of knowledge and wisdom that knows everything. Maybe someday.
I love Stable Diffusion, but I really have no use for ChatGPT. I’m amazed at how good the output can be… I just don’t have a need to generate text like that. Also, OpenAI has been making it steadily worse with ‘safety’ restrictions. I find it super annoying and even insulting when Bing-Sydney says “THIS CONVERSATION IS OVER”. It’s like being chastised by Facebook or Twitter for being ‘violent’ when you made a joke.
The ability to generate photographs and illustrations of practically anything, though, is fantastic. My girlfriend has been flagellating me into creating a bunch of really useless crap to promote her business on social media using SD, and I actually enjoy that part. I’ve made thousands of photos of scenery.
I use (free) ChatGPT only as tech support (with a large dose of scepticism of the results) so none of the ‘conversational’ limitations bother me
I didn’t find the image generation AIs as sticky for me, there’s not really anything I do day-to-day that would require a novel image
I use it now and again but I couldn’t imagine paying $20+ a month for it.
Personally I’ve abandoned ChatGPT in favor of Claude. It’s much more reliable.
I still use it sometimes, but ohhh boy it can be a wreck. Like I’ve started using the Creation Kit for Bethesda games, and you can bet your ass that anything you ask it, you’ll have to ask again. Countless times it’s a back-and-forth of:
Me: Hey ChatGPT, how can I do this or where is this feature?
ChatGPT: Here is something that is either not relevant or just does not exist in the CK.
Me: Hey that’s not right.
ChatGPT: “Oh sorry, here’s the thing you are looking for.” And then it’s still a 50-50 chance of it being real or fake.
Now I realize that the Creation Kit is kinda niche, and the info on it can be a pain to look up but it’s still annoying to wade through all the shit that it’s throwing in my direction.
With things that are a lot more popular, it’s a lot better tho. (still not as good as some people want everyone to believe)
Lol, ChatGPT has its pros and cons. For helping me write or refine content, it’s extremely helpful.
However, I did try to use it to write code for me. I design 3D models using a programming language (OpenSCAD), and the results are hilarious. It literally knows the syntax (kinda), and if I ask it to do something simple, it will essentially write the code for a general module (declaring key variables for the design), and then it calls a random module that doesn’t exist (like it once called a module “lerp()”, which is absolutely not a module). This magical module mysteriously does 99% of the design… but ChatGPT won’t give it to me. When I ask it to write the code for lerp(), it gives me something random like this:
module lerp() { splice(); }
where it simply calls up a new module that absolutely does not exist. The results are hilarious; the code totally does not compile or work as intended. It is completely wrong.
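For what it’s worth, “lerp” is the standard graphics-programming name for linear interpolation, which ChatGPT presumably picked up from other languages’ libraries. The real thing is a one-liner; here’s a Python sketch of what it actually does:

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: t=0 returns a, t=1 returns b, values in between blend."""
    return a + (b - a) * t

print(lerp(0.0, 10.0, 0.25))  # 2.5
```

The OpenSCAD equivalent would just be a user-defined function along the lines of `function lerp(a, b, t) = a + (b - a) * t;` — the point being that ChatGPT invented a magical built-in where a one-liner would have done.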
But I think people are working it out of their system - some found novelty in it that wore off fast. Others like myself use it to help embellish product descriptions for ebay listings and such.
I’ve been building a tool that uses ChatGPT behind the scenes and have found that that’s just part of the process of building a prompt and getting the results you want. It also depends on which chat model is being used. If you’re super vague, it’s going to give you rubbish every time. If you go back and forth with it though, you can keep whittling it down to give you better material. If you’re generating content, you can even tell it what format and structure to give the information back in (I learned how to make it give me JSON and markdown only).
Additionally, you can give ChatGPT a description of what it’s role is alongside the prompt, if you’re using the API and have control of that kind of thing. I’ve found that can help shape the responses up nicely right out of the box.
ChatGPT is very, very much a “your mileage may vary” tool. It needs to be set up well at the start, but so many companies have haphazardly jumped on using it and haven’t put in enough work prepping it.
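For anyone curious, the “role description” mentioned above is the system message in the chat API. A minimal Python sketch, assuming the OpenAI-style message format; the prompt text and model name are my own illustrative stand-ins, not the commenter’s actual setup:

```python
# The fixed "bootstrap" prompt that shapes every response.
# (Illustrative wording - the real prompt would be iterated on heavily.)
SYSTEM_PROMPT = (
    "You are an assistant that writes product descriptions. "
    "Reply only in Markdown with a level-2 heading and one short paragraph."
)

def build_messages(user_input: str) -> list:
    """Bundle the fixed system role with whatever the user typed."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

# With the official openai client, this would then be sent roughly as:
#   openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                messages=build_messages("vintage brass lamp"))
```

Keeping the system prompt fixed in code is what lets the end user type only a very basic prompt, as described above.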
Have you seen the JollyRoger Telco? They’ve started using ChatGPT to help have longer conversations with telemarketing scammers. I might actually re-subscribe to Jolly Roger (used them previously) if the new updated bots perform well enough.
Lol, that is a brilliant use of it. I’ll have to check that out.
What method did you use to generate only JSON? I’m using it (gpt3.5-turbo) in a prototype application, and even with giving it an example (one-shot prompting) and telling it to only output JSON, it sometimes gives me invalid results. I’ve read that the new function-calling feature is still not guaranteed to produce valid json. Microsoft’s “guidance” (https://github.com/microsoft/guidance) looks like what I need, but I haven’t got around to trying it yet.
If you don’t mind me asking, does your tool programmatically do the “whittling down” process by talking to ChatGPT behind the scenes, or does the user still talk to it directly? The former seems like a powerful technique, though tricky to pull off in practice, so I’m curious if anyone has managed it.
Don’t mind at all! Yeah, it does a ton of the work behind the scenes. I essentially have a prompt I spent quite a bit of time iterating on. Then from there, what the user types gets sent bundled in with my prompt bootstrap. So it reduces the work for the user to simply entering a very basic prompt.
Ah, interesting. I myself have made my own library to create callable “prompt functions” that prompt the model and validate the JSON outputs, which ensures type-safety and easy integration with normal code.
Lately, I’ve shifted more towards transforming ChatGPT’s outputs. By orchestrating multiple prompts and adding human influence, I can obtain responses that ChatGPT alone likely wouldn’t have come up with. Though, this has to be balanced with giving it the freedom to pursue a different thought process.
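A common shape for that JSON-validation step: strip the Markdown fence the model sometimes wraps JSON in, attempt a parse, and treat failure as the signal to re-prompt. A minimal Python sketch, not any particular library’s API:

```python
import json
from typing import Any, Optional

def parse_json_reply(raw: str) -> Optional[Any]:
    """Parse a model reply as JSON, tolerating a wrapping Markdown code fence.

    Returns the parsed value, or None when the reply doesn't validate -
    a real pipeline would re-prompt the model at that point.
    """
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Drop the ```json ... ``` fence models sometimes add despite instructions.
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return None
```

Even with one-shot examples and a “JSON only” instruction, outputs occasionally fail this check, which matches the experience described in the thread.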
I recently asked it about Nix Flakes, which were very niche and new during ChatGPT’s training. It was able to give me a reasonable answer in English, but if I first asked it in German, it couldn’t do it. It could reasonably translate the English one, though, after it generated that. Depending on what language you use to prompt it, you get very different answers, because it doesn’t transfer ideas and concepts between languages or, more generally, between disconnected bodies of text sources.
It is somewhat obvious if you know about the statistical nature of the models they use, but it’s a great example of why these things don’t KNOW things, they just regurgitate what they read in context before.
I agree. And I think it’s actually far from being “intelligent”. However, it is a very helpful tool for many tasks.
I tried it for about 20 minutes
Had it do a few funny things
Thought huh that’s neat
Went on with life
Since then the only times I’ve thought about ChatGPT has been seeing people using it in classes I’m in and just sitting here thinking “this is a fucking introductory course and you’re already cheating?”
I’m in discrete mathematics right now and have overheard way too many students hitting a brick wall with the current state of AI chatbots, as if that’s what they used almost exclusively up to this point.
If only there was some way these students could’ve learned how to understand the material.
I didn’t and don’t really care. Call me when there’s (free) AI that is good at dirty talk.
Orca 13B is coming out, is open source, and can be run locally, so you’ll get your wish really soon.
Using it for work from time to time, mostly when I have issues with HTML/CSS or some quick bash scripts. I’d probably miss copilot more. It saves a lot of time with code suggestions.
I still use free GPT-3 as a sort of high-level search engine, but lately I’m far more interested in local models. I haven’t used them for much beyond SillyTavern chatbots yet, but some aren’t terribly far off from GPT-3 from what I’ve seen (EDIT: though the models are much smaller at 13bn to 33bn parameters, vs GPT-3’s 175bn parameters). Responses are faster on my hardware than on OpenAI’s website, and it’s far less restrictive: no “as a large language model…” warnings. Definitely more interesting than sanitized corporate models.
The hardware requirements are pretty high (24GB of VRAM to run 8k-context models), but unless you plan on using it for hundreds of hours, you can rent a RunPod or something for cheaper than a used 3090.
What exact ones are you using and how can I use them?
This vid goes over it in better detail than I can.
Here is an alternative Piped link(s): https://piped.video/watch?v=199h5XxUEOY