firmly of the belief that guitars are real

  • 0 Posts
  • 49 Comments
Joined 1 year ago
Cake day: August 16th, 2023



  • Tons of work being done to improve the energy efficiency of ML models. We’re in the ENIAC days of AI right now. I’m not sure I see the problem other than that it would obviously be nicer if we could just build a time machine and steal an energy efficient AI from the year 2100? But in the real world, R&D takes time, and while, globally, we do need to reduce energy use, that doesn’t mean we should give up on R&D, especially when ML actually has the potential to help us achieve higher energy efficiencies across the entire economy.

    Not like tons of uses for servers aren’t trivial and honestly kind of a waste. Okay, ML models’ energy use is a scandal, but Netflix and TikTok? Completely worth every joule.



  • Encrypting your disk only provides at-rest protection, meaning there are entire swathes of physical attacks it provides zero protection against. There’s tons of stuff a malicious actor with physical access can do at runtime that you’d never notice. It quite literally only protects against thugs smashing your door in and physically walking away with the disk.

    So if you’ve painted yourself into a corner with a baby’s-first config, what you can do to step up your level of data protection (until you can redo your setup properly) is to create an encrypted filesystem image: use fallocate to create a large empty file, attach it to a loopback device, encrypt it with LUKS, and use it as a virtual filesystem. Then rsync your data directory into it and unlock/mount it at boot under the directory where Nextcloud is configured to store its data. It’s god-awful, but this should be more or less transparent to Nextcloud if you do it right, and then at least your data directory gets at-rest encryption. And tbqh, if someone is smash-and-grabbing your hard drive, they’re probably more interested in your data than in your OS config.
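    A rough sketch of those steps. Everything here is a placeholder (image path, size, mapper name, mount point), and the LUKS/mount steps need root, so the script only attempts them when actually run as root:

```shell
# Sketch only — adjust paths, names, and size to your setup.
IMG="${TMPDIR:-/tmp}/nextcloud-data.img"   # backing file for the container
MNT=/srv/nextcloud-data                    # where Nextcloud expects its data
SIZE=512M                                  # size it to fit your data directory

fallocate -l "$SIZE" "$IMG"                # 1. create a large empty file

if [ "$(id -u)" -eq 0 ]; then
    LOOP=$(losetup --find --show "$IMG")   # 2. attach it to a free loop device
    cryptsetup luksFormat "$LOOP"          # 3. encrypt it (prompts for a passphrase)
    cryptsetup open "$LOOP" ncdata         # 4. unlock -> /dev/mapper/ncdata
    mkfs.ext4 /dev/mapper/ncdata           # 5. put a filesystem inside
    mkdir -p "$MNT"
    mount /dev/mapper/ncdata "$MNT"        # 6. mount it where the data lives
    # 7. rsync the existing data directory in, e.g.:
    #    rsync -a /var/www/nextcloud/data/ "$MNT"/
else
    echo "re-run the LUKS/mount steps as root against $IMG"
fi
```

    For the unlock-at-boot part you’d typically wire the container into /etc/crypttab and /etc/fstab (or a small systemd unit), so it gets opened and mounted before Nextcloud starts.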

    I wouldn’t say this is an acceptable or preferable alternative to FDE, but it sounds like you’re still figuring out the best ways to set these things up, and this will get you more protection than none. But, realistically, you should probably not worry about it too much and should think about the security of your setup as a learning exercise/study in best practices.


  • For a lot of brands, their most valuable customers are middle- and upper-class people, and their tastes tend to veer prudish/judgmental/conservative on these things. Take the example of a kitchen appliance manufacturer: we think of that as a very popular, cross-class purchase category, except that for tons of us our kitchen appliances are chosen for us by our landlords, and among homeowners, the only ones regularly going out and swapping out their kitchen appliances are the well-off ones. LG’s best customers, and this is true of most businesses, are their rich customers.

    For the most part, rich and affluent customers’ tastes count for more just because they can consume more, and many of the people in charge of advertising decisions at these companies are themselves middle or upper class, so it’s like a self-reinforcing ideological loop caused by structural economic inequality. The population at large’s opinion about whether shit like this even matters doesn’t really enter into the equation because what counts as “respectable” for companies is entirely decided on a per-dollar basis.


  • Yeah, but the fact it brought down some powerful people doesn’t mean it threatened the system as a whole. Things like Me Too become threatening to the system when they become widespread and ubiquitous and there’s a perception the ruling class isn’t interested in fixing it up. Also, many upper class people, particularly of course women, are survivors and are not interested in being further endangered by rape culture, so there was support from within the upper echelons about Me Too.

    I’m not saying that’s a bad thing, I’m saying that’s a little perpendicular to the whole question of whether or not Twitter was some kind of revolutionary working-class institution before Musk bought it. It was an influence marketplace. Everybody used it to buy and sell influence. Including progressive movements, and fascists. This had good, and bad, effects, and we shouldn’t put it on a pedestal.

    Musk bought it because he likes to very publicly fuck around with stock prices illegally; there have been years of back and forth between him and the SEC over this. This is the man who tried to manipulate Tesla’s stock price to $420 a few years ago because he thought it would be funny and got charged with fraud by the SEC, then proceeded to use SNL to do a crypto pump-and-dump in real time on national television.

    His bluff just got called this latest time, so he was forced to go through with the purchase. And he’s an idiot fascist.

    In reality, all the influence peddling and agitating is just moving to other platforms, and things will be more or less the same as they were after Twitter, sorry, “X” collapses.


  • The best part is, the MTPE (machine translation post-editing) workers’ output is 100% going to get fed back into the models, so it’s only a matter of time before the models’ average error rate is low enough that there’s no real reason to pay anyone to look at their output.

    People are, I think, overly optimistic that AI won’t eventually take their job. Why the hell do people think their boss wants to pay them plus pay for an AI? When they can, they’ll just switch as much as they can over to AI. We have a quantifiable error rate, or range of error rates, for most tasks, so all they have to do is build a model with an average error rate lower than, say, most people’s, and the case for employing us even to review its output goes out the window. It’s not like humans don’t make mistakes, so in reality, we 100% are in competition with AI for every task category. AI doesn’t have to be perfect to make us obsolete; it just has to be overall cheaper with an “acceptable” output quality.

    In fact, businesses often accept tradeoffs in quality in exchange for cost savings, so AI doesn’t even have to get better at something to replace us. If it costs 1% as much and is 80% as good, and that 20% drop in quality isn’t enough to affect their bottom line, you can bet your ass that humans won’t be doing that job anymore. I’ve already seen comments from copywriters about how they lost clients to ChatGPT on exactly these grounds. ChatGPT.

    Technology development takes a long time (I’m thinking 30-50 years out here), but the point is, this is a different kind of technological development than earlier ones. Future generations might, in fact, not have jobs at all.


  • You do realize that’s a marketing line about Twitter, right? It’s a private, for-profit corporation whose entire purpose is to inspire users to give away data about themselves for free. They don’t care what you think, they care how to manipulate you into buying things from them. Go ahead, buy your justice deodorant to wear to the protest. They’re very scared. After all, the government hasn’t learned anything about counterinsurgency since the 1960s, so it’s not like they know how to use all this data and surveillance to keep an eye on us.

    Trump was one of Twitter’s biggest users and they didn’t boot him until he tried to start a fascist insurrection and, again, made it impossible not to boot him. They’re an advertising platform first and foremost, they pandered to the far right as much as to anyone else, and the right got plenty of use out of that platform even before Musk took it over.

    The only way you could think otherwise is if you haven’t actually used Twitter since, like, 2011.





  • There’s a thing I read somewhere – computer science has a way of understating both the long-term potential impact of a new technology, and the timelines required to get there. People are being told about what’s eventually possible, and they look around and see that the top-secret best in category at this moment is ELIZA with a calculator, and they see a mismatch.

    Thing is, though, it’s entirely possible to recognize that the technology is in very early stages yet also recognize it still has long-term potential. Almost as soon as the Internet was invented (late 1960s), people were talking about how one day you could browse a mail-order catalogue from your TV and place orders from the comfort of your couch. But until the late 1990s, it was a fantasy, and probably nobody outside the field had a good reason to take it seriously. Now we laugh at how limited the imaginations of people in the 1960s were. Hop in a time machine and tell futurists from that era that our phones would be our TVs and we’d do all our ordering, and also our product research, on them, but by tapping the screen instead of calling in orders, and oh yeah, there’s no landline, and they’d probably look at you like you were nuts.

    Anyways, considering the amount of interest in AI software even at its current level, I think there’s a clear pathway from “here” to “there.” Just don’t breathlessly follow the hype because it’ll likely follow a similar trajectory to the original computer revolution, which required about 20-30 years of massive investment and constant incremental R&D to create anything worth actually looking at by members of the public, and even further time from there to actually penetrate into every corner of society.



  • According to the article, they got an experimental LLM to reliably perform basic arithmetic, which would be a pretty substantial improvement if true. I.e., instead of stochastically guessing or offloading the work to an interpreter, the model itself was able to reliably perform a reasoning task that LLMs have struggled with so far.

    It’s rather exciting, tbh. It kicks open the door to a whole new universe of applications, if true. It’s only technically a step in the direction of AGI, though, since, if AGI is possible at all, every improvement like this counts as a step towards it. If this development is really what triggered the board coup, though, then it makes the board-coup group look even more ridiculous than they did before: this is step 1 toward a model that can be tasked with ingesting spreadsheets and doing useful math on them. And I say that as someone who leans pretty pessimistic in the AI safety debate.


  • Well, kind of. It’s a bad look for MS to be so heavily invested in such a dumpster fire of a corporation, so it’s good for them that it got resolved, but they would have come out ahead if Altman had joined them. It was the other investors, including a number of employees, who would have really lost out if the company had just collapsed in on itself like it immediately started doing. This got resolved sort of against MS’s best interests.

    So, sure, investor win. But MS more or less lost this one.

    I generally agree that it’s unlikely the non-profit structure is going to do its job here, though I’ve seen something like it more or less work on a smaller scale, with things that are less intensely of interest to the entire capitalist class. I have no idea what kind of regulations we’d end up with, though, considering how many oligopolists are involved.




  • Yeah, but the point is: if you want the real-world equivalent of the habitable worlds the protomolecule opened up, at least as far as Mars is concerned, that’s actually just the entire rest of the outer solar system, especially the moons and the asteroid belt. They’re exactly as habitable as Mars will ever be in actual reality. Mars stops mattering except as an orbital pitstop as soon as there are places just as good, if not better developed, farther out, in smaller or non-existent gravity wells.

    Mars has no active core dynamo, therefore no global magnetic field, therefore the only radiation shielding you get is if you bury yourself. And the energy required to generate an artificial magnetosphere that can actually protect us from cosmic rays… first, it’s a preposterous amount; second, it’s energy rent you have to pay in perpetuity to get an inferior environment anyway, with zero resources that aren’t available in greater abundance in cheaper gravity wells, because you’re not realistically going to be spinning the core back up anytime soon. Then you need to initiate planet-wide processes to erode the toxic regolith. The numbers just do not add up.

    Then there’s the 38% Earth gravity, which (a) is likely to be about as unhealthy as a spun-up semi-microgravity environment, and (b) isn’t strong enough to hold onto an atmosphere thick enough to support humans over the long run. So not only do you have to pay a continuous, gargantuan energy rent just to one day walk on the surface without being killed by cosmic rays, you also have to import an atmosphere you’re guaranteed to have to keep replacing.
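    For what it’s worth, the 38% figure checks out as simple surface gravity, g = GM/r², using the standard published values for Mars’s mass and mean radius. Quick back-of-envelope:

```shell
awk 'BEGIN {
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 6.417e23         # mass of Mars, kg
    r = 3.3895e6         # mean radius of Mars, m
    g = G * M / (r * r)  # surface gravity, m/s^2
    printf "g_mars = %.2f m/s^2 = %.0f%% of Earth gravity\n", g, 100 * g / 9.81
}'
# prints: g_mars = 3.73 m/s^2 = 38% of Earth gravity
```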

    I enjoy the Expanse, but in spite of its hard science reputation it’s honestly about as realistic as Star Trek in a lot of ways. Terraforming Mars is a fun thought experiment but Jules Verne level out of date at this point. Take it as an unrealistic backdrop for a very fun geopolitical space drama, not a realistic exploration of how space development would actually go. They needed a third power to make the politics complicated. Nobody’s ever gonna breathe the free air of Mars, that’s a fantasy, and that’s knowable today, which means it’ll never be invested in seriously.