
A UK government committee directly asked Microsoft and Meta executives whether they could recall an AI "model that is identified as being unsafe." They virtually sidestepped the question.



    The House of Lords communications and digital committee met today with Rob Sherman, VP of policy and deputy chief privacy officer at Meta, and Owen Larter, director of global responsible AI public policy at Microsoft, to discuss large language models and some of the wider implications of AI. In a far-ranging discussion in which many words were said and not a lot of actual information was conveyed, one particular tidbit caught our attention.

    When asked directly by the chair of the committee, Baroness Stowell of Beeston, whether either company was capable of recalling an AI model that had been "identified as unsafe," or of stopping it from being deployed any further, and how that might work, Rob Sherman gave a somewhat rambling response:

    "I think it depends on what the technology is and how it's being used … one of the things that is quite important is to think about these things upfront before they're released … there are a number of other measures that we can take, so for example, once a model is released there's a lot of work that what we call a deployer of the model has to do, so there's not only one actor that's responsible for deploying this technology…

    "When we released Llama, [we] put out a responsible use guide that talks about the steps that a deployer of the technology can do to make sure that it's used safely, and that includes things like what we call fine tuning, which is taking the model and making sure it's used appropriately…and then also filtering on the outputs to make sure that when somebody is using it in an end capacity, that the model is being used responsibly and thoughtfully."

    Microsoft's Owen Larter, meanwhile, did not respond at all, although in fairness the discussion was wide-ranging and somewhat pushed for time. Regardless, the fact that Meta's representative did not answer the question directly, but instead spun his response out into a wider point about responsible use by others, is not entirely surprising.

    A lot was made over the course of the debate regarding the need for careful handling of AI models, and the potential risks and concerns this new technology may create.

    However, beyond a few token concessions to emerging use policies and partnerships created to discuss the issue, the debate quickly became muddied, as both representatives at points struggled to define what exactly it was they were debating.

    As Rob Sherman helpfully put it earlier in the discussion, in regard to the potential risks of irresponsible AI usage:

    "What are the risks that we're thinking about, what are the tools that we have to assess whether those risks exist, and then what are the things we need to do to mitigate them?"

    While both participants seemed to agree that there was a "conversation to be had" about the issues discussed, neither seemed particularly keen on having that conversation, y'know, now. Each question was quickly answered with a fast-flowing stream of potential policy, future risk assessment mechanisms, and some currently ill-defined steps already being taken, the sum total of which seems to equate to "we're working on it".

    All this will come as little comfort to those concerned about the far-reaching implications of AI, and the potential risks of creating and releasing a technology that even the companies behind it struggle to pin down in meaningful terms.

    Today may have been an opportunity to lay down some steadfast plans for regulating this increasingly important tool, but beyond the odd concession towards "security protections" and a "globally coherent approach", it seems progress on controlling and regulating AI in any meaningful way remains slow-going.



    Article information

    Author: Cindy Holmes

    Last Updated: 15 December 2023

