Random News Stories

  • Continuing on the motoring theme:

    A driver lost control and plowed into glass windows at the front of a Lake Elsinore Urgent Care Thursday, but no one was injured.

    The crash happened about 11:45 a.m. at Accelerated Urgent Care, 29997 Canyon Hills Road, according to the Riverside County Fire Department.

    The agency said that several engine crews were sent to the location and discovered that the vehicle had been driven into the windows adjacent to the entranceway.

    There was broken glass, but the Urgent Care patients and staff escaped unscathed, as did the motorist, whose identity was not disclosed.

    Firefighters found the man sitting on the curb, waiting to speak with sheriff’s deputies, according to reports from the scene.

    The Urgent Care apparently remained open after the crash, which was under investigation.

    In a radio report I heard (which prompted me to look for an online one to post here), it was claimed that the driver was speeding at the time of the accident, hence the irony in the name of the business he crashed into.

    Comment


    • https://www.theguardian.com/tv-and-r...owrunner-ai-tv

      A new AI service allows viewers to create TV shows. Are we doomed?

      Showrunner will let users generate episodes with prompts, which could be an alarming next step or a fleeting novelty

      One of the key strategies of streaming services is to keep you in front of a screen for as long as possible. As soon as one episode of a show you’re watching ends, the next one pops up automatically. But this approach has its limits. After all, when a series ends, Netflix will try to autoplay another series that it thinks you’ll like, but it has a terrible success rate. Maybe the tone of the suggested show is wrong, or maybe it’s too exhausting to be dumped into the sea of exposition that a new show brings. Maybe it’s just too jarring to be pulled out of one world and dumped straight into another without any space to breathe.

      You know what would fix that? If Netflix gave you the chance to automatically create a new episode of the show you were already watching. You’d stay there forever, wouldn’t you? It would be wonderful. Ladies and gentlemen, you will be thrilled to learn that this glorious technology now exists.

      This week, a company called Fable Studio announced the launch of Showrunner, the world’s first AI-generated streaming service. With a prompt of just a few words, Showrunner promises to allow viewers to write, voice and animate their own television episodes.

      Users who sign up for the Showrunner waitlist will eventually get to see 10 animated shows. One of them, Ikiru Shinu, is billed as a dark horror anime. Another, Sim Francisco, is an anthology show about people living in the titular city. And then there’s Exit Valley, a South Park-style Silicon Valley satire. Users can watch the episodes, or make their own by writing prompts that will be generated into scenes that can be stitched together into full episodes. For example, you can presumably watch Exit Valley and then type ‘The characters in this entertainment industry satire learn that they are part of an AI-generated content drive designed specifically to destroy the entertainment industry, and the satire explodes their heads’, and that’s what the next episode will be.

      The service isn’t entirely without precedent. Last year Fable released an AI-generated episode of South Park that, if you weren’t watching very closely, came off as fairly convincing. Of course, the moment you did start paying attention, the whole thing became a kind of living nightmare. The jokes were bad, the voices were wrong and everyone spoke with the blank intonation of someone who’d recently been brainwashed into murdering you in your sleep. But it’s early days. As we’ve seen with each successive ChatGPT release, AI can improve at a frightening pace. Before long, Fable might be able to generate a South Park episode that is actually good, and then we’re all in trouble.

      Clearly this could go one of two ways. The big fear – the thing that basically caused all of the Hollywood strikes last year – is that, even if Showrunner doesn’t become a mainstream success, the entertainment industry is nevertheless going to co-opt this technology wholesale. It will be slow at first: maybe a studio will use it to generate movie plots, which can then be finessed by the human experts it has to hand. But gradually that could fall away, until the entertainment industry consists of three or four executives writing AI prompts like ‘Dinosaur attacks girl with big boobs’ and keeping all the revenue for themselves.

      However, based on current evidence, that isn’t likely to happen just yet. The way it looks now, Showrunner has the unmistakable air of novelty. A flood of people will initially use it to make a bunch of low-quality videos that will turn the platform into an inexplicably less human TikTok or a Quibi that isn’t quite as embarrassing to say out loud. My theory is that everyone will create their own episodes at first, and try to share them, but nobody else will watch because they’re watching episodes that they generated themselves, and then everyone will get bored because what’s the point of making something just for yourself? The bar for creation has been set too low. People will lose interest fast.

      And this might be a good thing. God knows the movie industry needs all the help it can get right now. Maybe Showrunner exists as a reminder that the robots are even worse at making stuff than we are. If that doesn’t nudge us back to the mainstream, nothing will.

      This whole AI thing seems to be a rocket on rails. A few years ago nobody cared, then ChatGPT showed up and suddenly everybody's got an AI and it's the biggest thing since white bread.

      "I want to see a movie about X and Y with a happy ending" and the machine creates a movie on the spot just for you?

      Comment


      • Originally posted by Frank Cox View Post
        https://www.theguardian.com/tv-and-r...owrunner-ai-tv



        This whole AI thing seems to be a rocket on rails. A few years ago nobody cared, then ChatGPT showed up and suddenly everybody's got an AI and it's the biggest thing since white bread.

        "I want to see a movie about X and Y with a happy ending" and the machine creates a movie on the spot just for you?

        Right now, the resources to do this are just too limited and the current models aren't good enough yet. But if you look at the progress that has been made over the last few years, it's not a matter of IF this happens, but WHEN.

        We've been fine-tuning a bunch of open-source models for about a year now and some of them have become scary good at stuff we otherwise had to do by hand (a rough sketch of what that looks like in code follows below).

        This whole generative AI thing is like Pandora's box: the devil is out of the box, and you will not get it back in there. Even if what we have RIGHT NOW is already peak AI, we're in for a ride; most people simply have no idea yet. This thing will be bigger than the coming of the Internet and it will have a profound effect on the way we look at work, but everybody is still sleeping. If governments should be afraid of something, it's how they're going to handle this. Unfortunately, most politicians are completely clueless about what's coming. Maybe we should replace them with AI first?
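
        To make "fine-tuning an open-source model" a bit more concrete, here is a minimal sketch of what the simplest version looks like with the Hugging Face transformers and datasets libraries. The model name, the training text file and the hyperparameters below are placeholders I picked for illustration, not what we actually use.

# A minimal sketch of fine-tuning a small open-source causal language model
# on your own text. Model name, file path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-160m"           # any small open-source causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token       # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text training data, one example per line (placeholder path).
dataset = load_dataset("text", data_files={"train": "our_domain_text.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=5e-5),
    train_dataset=tokenized,
    # mlm=False makes the collator build next-token-prediction labels from the inputs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

        In practice you train on far more text, usually with tricks like LoRA adapters to keep the hardware requirements sane, but the basic loop really is just: load a pretrained model, tokenize your own data, and keep training it.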

        Comment


        • I'm a "computer bug" but I really don't understand exactly what AI is. That video I posted a while back has a good explanation of how an LLM can write a sentence/paragraph/paper, but I get the impression that there's a lot more to it than that.

          Comment


          • And here we go...

            https://www.indiewire.com/news/break...ai-1235010605/

            Sony Will Use AI to Cut Film Costs, Says CEO Tony Vinciquerra

            The next “Spider-Verse” film may have a new animation style: AI.

            Sony Pictures Entertainment (SPE) CEO Tony Vinciquerra does not mince words when it comes to artificial intelligence. He likes the tech — or at the very least, he likes the economics.

            “We are very focused on AI. The biggest problem with making films today is the expense,” Vinciquerra said at Sony’s Thursday (Friday in Japan) investor event. “We will be looking at ways to…produce both films for theaters and television in a more efficient way, using AI primarily.”

            That’s about the strongest support for AI we’ve heard from a film studio head.

            Vinciquerra knows how controversial his comments could be with creatives.

            “We had an 8-month strike over AI last year,” Vinciquerra began his response to the first analyst question (from Nomura Securities) during his Q&A portion of the annual event. He also acknowledged that ongoing IATSE talks and the forthcoming Teamsters negotiations are “both over AI again.”

            The sum total of those discussions between Hollywood’s workers and its studios will inform just how far Vinciquerra and others can go.

            I've said before and I'll say it again that those who believe union contracts will stop the use of AI are in the position of King Canute against the tide.

            If Unionized Movie Studio can make a movie for $100 million and Non-Union Movie Studio can make a similar movie using AI for 98 cents, how long is the first one going to be able to continue operating?

            Comment


            • Originally posted by Frank Cox View Post
              I'm a "computer bug" but I really don't understand exactly what AI is. That video I posted a while back has a good explanation of how an LLM can write a sentence/paragraph/paper, but I get the impression that there's a lot more to it than that.

              You know, you're in good company: OpenAI, you know, those guys and gals behind the thing called ChatGPT, don't really know how their shit works either.

              AI is a very broad concept; it has been around in one form or another for decades now, but what's catching all the hype lately is generative AI. Generative AI can interact with humans in a much more natural way and it shows some real signs of intelligence. While this intelligence isn't the same as human intelligence, it's undeniably a form of intelligence.

              The thing is, we build those giant neural networks, consisting of billions of nodes and based on the transformer architecture, we feed them billions and billions of "information points", and at *some point* something like "intelligence" seems to emerge. Intelligence, it seems to me, is a function of complexity and structure. Since we have no clear definition of "intelligence", it's also hard to draw a line between when something is intelligent and when it is not. It's probably not something that can be defined by a hard, distinct line, but only as something of a gradient.

              I'd love to explain to you how LLMs and associated beasts work, at least the general concept behind them, but I'm a little constrained by time and forum context. Also, I'm hampered by the same shortcomings that OpenAI faces: I can't exactly explain what causes those emergent properties of intelligence to arise, although I have some theories, partly backed by observations we made ourselves while training those LLMs for specific tasks.

              In general, we've built a brain simulator to some extent. A combination of raw, cheap processing power, memory and vast amounts of "public" information has enabled us to get here, after quite a few years of trying. While it doesn't exactly work like the brain, it's close enough for quite a few things we're trying to do with it...
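
              To put a little flesh on "transformer architecture" for the non-specialists: the building block that gets stacked and repeated billions of times over is called self-attention. Below is a toy, single-head version of that one step, with made-up numbers, nothing close to a real model, just the core idea that every token's new representation is a weighted mix of all the tokens around it.

# Toy single-head self-attention, the core step inside a transformer block.
# All inputs are random placeholders; real models use learned weights.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (tokens, dim) input embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V                                 # each token's new vector is a weighted mix of all tokens

rng = np.random.default_rng(0)
dim = 8
X = rng.normal(size=(5, dim))                          # five "tokens" with made-up embeddings
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)             # (5, 8): one updated vector per token

              Stack a lot of these blocks, make the projection matrices learnable, train the whole thing to predict the next token on a mountain of text, and you have the skeleton of an LLM.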

              Comment
