• 1 Post
  • 17 Comments
Joined 1 month ago
Cake day: September 13th, 2024


  • In fairness, you can't just say it's not a zero-sum game when the article is supported by a quote from one individual saying they were glad it told them in some cases. We don't know how effective it is.

    This is normalizing very intimate (and automated) surveillance. Kids all have smartphones and can google anything they want when they aren't using school hardware. If kids have any serious premeditation to do something bad, then they will do it on their smartphones.

    The only way this would be effective is to catch students before they are aware they are being watched (poof, that's gone tomorrow), or if the student is so dirt poor that they don't have a smartphone or craptop.

    And what else will the student data be used for? Could it be sold? It would certainly have value. Good intentions only exist right now… data is FOREVER.

  • No, they will judge you as being above the law (original commenter), and they will be wrong, which doesn't matter, as long as we feel continuity with our synthesized narrative.

    Because truth doesn't matter. Our narrative just needs to be as loud as the opposition's, and then we can confuse people just like those in power… and then the impressionable people trying to understand what's going on or what's morally right will believe one side or the other, and truth will not need to be discussed, because it's not as catchy anyway.

    Then people won't need to be trusted to form their own worldview based on facts; they can neatly choose between a few curated viewpoints, and holding views from multiple viewpoints will isolate them from relevance when they are shunned for not memeing their ideologies like everyone else.

  • There's no particular fuck-up mentioned by this article.

    The company that conducted the study this article speculates on said these tools are getting rapidly better, and that they aren't suggesting banning AI development assistants.

    Also, as quoted in the article, the use of these coding assistants is a process in and of itself. If you aren't using AI carefully and iteratively, then you won't get good results with current models. How we interact with models is as important as the models' capability. The article quotes that if models are used well, a coder can be 2x or 3x faster. Not sure about that personally… seems optimistic depending on what's being developed.

    It seems like a good discussion with no obvious conclusion given the infancy of the tech. Yet the article's headline and accompanying image suggest it's wreaking havoc.

    Reduction of complexity in this topic serves nobody. We should have the patience and impartiality to watch it develop and form opinions independently of commenter and headline sentiment. Groupthink has been particularly dumb on this topic from what I've seen.