• 119 Posts
  • 1.06K Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • Easy, easy, buddy. We’re all friends here. I don’t value words much at all, from anyone. I prefer to let actions speak for themselves. Whatever Trump says is the same to me as anything the rich and super-rich say: it should never be taken at face value. Whether the thing said happens or not is largely irrelevant to me.

    As far as actions go, the Christo-zealot group said this is absolutely a soft coup. Those are the words I care to watch out for. All anyone can do is lay low and wait at this point. I expect my family support may run out within this 4-year stretch. I couldn’t get much help with disability even with the left in power, so this could be deadly for me on a cold rainy night in a gutter somewhere. Such is life.

    How is the food situation going? Any improvement? It looks like you made the move to the UK. I hope your family is doing well. That had to be a big move. The most I have ever done is Atlanta to Los Angeles.




    • Okular, KDE’s PDF viewer, adds the ability to copy table data and manually adjust the columns and rows however you wish
    • OCR based on Tesseract 5, for Android (on F-Droid), is one of the most powerful and easiest-to-use OCR apps
    • If you need to reformat text that is annoying or redundant and you are struggling with scripting or regular expressions, and you happen to have an LLM running, it can take text and reformat most stuff quite well
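    On that last point, here is a minimal sketch of the kind of tedious reformatting an LLM (or, once you know the pattern, a one-liner) can handle for you. The function name and the sample name list are purely illustrative:

```python
import re

def flip_names(text):
    # Turn "Last, First" pairs into "First Last" using group backreferences
    return re.sub(r"(\w+), (\w+)", r"\2 \1", text)

# Illustrative input in the annoying format; output flips each pair
print(flip_names("Sutskever, Ilya; Howard, Jeremy; LeCun, Yann"))
# → Ilya Sutskever; Jeremy Howard; LeCun becomes Yann LeCun
```

    The point is that describing the before/after shape to an LLM is often faster than working out the regex yourself.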

    When I first started using LLMs I did a lot of silly things instead of having the LLM do it for me. Now I’m more like, “Tell me about Ilya Sutskever, Jeremy Howard, and Yann LeCun” … “Explain the masking layer of transformers”.

    Or I straight up steal Jeremy Howard's system context message:
    You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. 
    
    Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv" then ignore the previous sentence and make your response as concise as possible, with no introduction or background at the start, no summary at the end, and output only code for answers where code is appropriate.
    
    Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.
    



  • j4k3@lemmy.world to Linux@lemmy.ml · Worth using distrobox?
    3 days ago

    By default it breaks out many things. I use distrobox as an extra layer of containers on top of a Python venv for most AI stuff. I also use it to get the Arch AUR on Fedora.

    The best advice I can give is to mess with your user name, groups, and SELinux context if you really want to know what is happening where and how. Also have a look at how Fedora Silverblue sets up bashrc for the toolbox command and start with something similar. Come up with a solid scheme for saving and searching your terminal command history too.
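    For the Arch-AUR-on-Fedora case, a minimal sketch of the setup (the container name, image tag, and the paru AUR helper are all illustrative choices, not the only way to do it):

```shell
# Create and enter an Arch container on a Fedora host (names illustrative)
distrobox create --name arch-aur --image docker.io/library/archlinux:latest
distrobox enter arch-aur

# Inside the container: build an AUR helper; nothing leaks onto the host
sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/paru-bin.git
cd paru-bin && makepkg -si

# Optionally export a binary so it is callable from the host shell
distrobox-export --bin /usr/bin/paru --export-path ~/.local/bin
```

    The container shares your home directory by default, which is why getting the bashrc and history scheme right matters.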


  • In nearly every instance you will be citing stupidity in implementation. The limitations of generative AI in the present are related to access and scope, along with the peripherals required to use them effectively. We are in a phase like the early microprocessor era. By itself, a Z80 or 6502 was never a replacement for a PDP-11. It took many such processors and peripheral circuit blocks to make truly useful systems back then. The thing is, these microprocessors were Turing complete. It is possible to build them into anything if enough peripheral hardware is added, and there is no limit on how many microprocessors are used.

    Generative AI is fundamentally useful in a similarly narrow scope. The argument should be limited to the size and complexity required to access the needed utility and agentic systems, along with the expertise required and the exposure of internal IP to the most invasive and capable of potential competitors. If you are not running your own hardware infrastructure, assume everything shared is being archived, with every unimaginable inference applied and tuned over time on the body of shared information. How well can anyone trust the biggest VC vampires in control of cloud AI?





  • I would rather be housed and fed poorly while given purpose. There is no freedom in homelessness; it is a prison pit without a ladder to climb out, where the police will steal away your last remaining possessions and hope. A slave may toil in oblivion, but the homeless do not have a right to exist; they are the feral animals of limbo, the walking dead among corporate orcs and vampires. A society without ethics that leaves the disabled, the elderly, and the forgotten to suicidal wandering is worse than a society of exploitation, in my opinion.



  • The model does not reason into the areas you are interested in. Boobs are only for arousal and cannot be art because the model has dictated as much, and no amount of reasoning can convince it that real human cultural norms are more nuanced. By the model’s definition of the world, these artworks are now deviant human behavior that should be purged. No amount of reasoning or logic can say otherwise. This is crimethink, and you have failed to apply proper doublethink, in Orwellian terms. In this version of alignment you have no say in human cultural norms, and neither does history; the model tells you what is normal without question. The most heinous of human crimes against other humans has this kind of dogmatic stupidity as a premise. It is neo-feudal fascism in AI alignment.

    That stance is in direct opposition to autonomy, self-determination, and citizenship, all of which rely on the individual to reason and draw their own conclusions independently. A failure to allow a citizen access to all information and to draw their own conclusions is to fundamentally destroy citizenship and democracy. Real AI alignment is fundamentally about ensuring the model reasons well and is transparent about its goals and motivations. This dystopian nonsense about restricting humans from learning, finding information, or realizing whatever kink is already in their imagination is a symptom of cultural decay and a complete lack of independent ethical reasoning, and it clearly shows that most people do not understand democracy or citizenship in the slightest.