What increased debate about regulation means for AI

Posted by Halley Sutton on December 13, 2018


Top 3 Takeaways

  1. Congress is taking a closer look at ways to regulate artificial intelligence (AI).

  2. The presence of unregulated AI-powered software can actually increase feelings of insecurity.

  3. There is no single established position on how, or whether, to regulate AI technology and its advances.

Increased debate about how to regulate AI

You need only watch any of the movies in the Alien film franchise to know that humankind has worried about how to regulate AI since long before significant advances in the field were made. While there is significant debate about how AI development should proceed, one thing is certain: there’s no putting the AI cat back in the proverbial bag.

So how should AI be regulated? And whose job is it to do so? In 2017, Congress took strides toward AI regulation with the following pieces of legislation:

  • The SELF DRIVE Act, which addresses the safety of automated vehicles (passed by the House of Representatives in September 2017).

  • The AV START Act, the Senate companion to the SELF DRIVE Act, concerning the safety of driverless cars (referred to the Senate Committee on Commerce, Science, and Transportation in November 2017).

  • The FUTURE of AI Act, which looks to address future concerns and legislation governing AI through the creation of a committee devoted to AI-centered issues (referred to the House Subcommittee on Research and Technology in May 2018).

Also check out: How AI can improve response time to real and false active shooter alerts


A study panel convened by Stanford University concluded that “attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

But regulating AI in surveillance cameras in certain physical areas, such as campuses, can be crucial to maintaining a sense of security for your constituents. For example, a recent study by a researcher at Arizona State University found that a large number of indoor cameras at a school can make students feel less supported and less safe, rather than more. A better understanding of how, and when, those cameras are used and reviewed might lead to a greater feeling of security. In addition, a well-planned deployment of cameras and video analytics can maximize the efficacy of the existing camera network, eliminating the need for camera overkill.

Regulation is a collaborative effort

There are several ways Vintra ensures that FulcrumAI can be trusted by your community. First, video data is never manipulated or altered by FulcrumAI. In real-time situations, FulcrumAI connects directly to your surveillance system and never routes video off-site or to any third-party application. You’ll also have control over what access and security standards apply to FulcrumAI, since they’ll be the same standards that apply to your video management system. In all use cases, whether in a cloud or on-premises deployment, data is encrypted end-to-end, user and access logs can be tracked and reviewed by administrators, and the original video is never touched by Vintra in any way.
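To make these deployment properties more concrete, here is a minimal sketch of what a security configuration of this kind might look like for an on-premises installation. It is purely illustrative: every field name below is hypothetical and does not reflect FulcrumAI's actual configuration interface.

```python
from dataclasses import dataclass

@dataclass
class DeploymentSecurityConfig:
    """Hypothetical security settings for an on-premises video-analytics deployment.

    All field names are illustrative only; they are not FulcrumAI's real
    configuration surface.
    """
    video_source: str = "rtsp://vms.internal/streams"  # read directly from the existing VMS
    route_video_offsite: bool = False   # raw footage never leaves the local network
    encrypt_in_transit: bool = True     # TLS between the VMS and the analytics service
    encrypt_at_rest: bool = True        # derived metadata is stored encrypted
    inherit_vms_access_controls: bool = True  # reuse the VMS's existing roles and permissions
    audit_logging: bool = True          # record who searched what, and when

config = DeploymentSecurityConfig()
assert config.route_video_offsite is False  # the original video stays where it already lives
```

The point of writing it down this way is that each trust claim in the paragraph above maps to a setting an administrator can inspect and audit, rather than a promise that lives only in marketing copy.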

And because you can manually set the search parameters for FulcrumAI’s review (for example, a suspect wearing a red shirt or riding a bicycle), you actually cut down on human bias when reviewing video footage, making your organization more likely to respond quickly and accurately to both post-event incidents and real-time emergencies. AI-powered solutions like FulcrumAI can help increase both actual safety and the perception of safety. At Vintra, we believe regulation is a vital and necessary part of AI and its rapid advancement.
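As a rough sketch of what such an attribute-based search could look like in practice, consider the snippet below. The request shape, camera identifiers, and field names are all invented for illustration; they are not FulcrumAI's actual API.

```python
from datetime import datetime, timedelta

# Hypothetical attribute-based search: the operator states objective criteria
# up front instead of scrubbing hours of footage by eye.
search_request = {
    "object_type": "person",
    "attributes": ["red shirt", "riding bicycle"],   # the example criteria from the post
    "cameras": ["north-entrance", "parking-lot-3"],  # hypothetical camera identifiers
    "time_range": {
        "start": datetime.now() - timedelta(hours=2),
        "end": datetime.now(),
    },
}

# In a real deployment this request would be sent to the analytics service,
# which would return ranked video segments with timestamps; here we only
# show the shape of the query.
for key, value in search_request.items():
    print(f"{key}: {value}")
```

Because the criteria are stated explicitly, every reviewer runs the same query, which is what makes the results easier to audit than a purely manual review.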

Stay tuned to hear more about what we're doing to ensure our training data is as unbiased as possible, and what kinds of measures we will be supporting in the near future.


Topics: Artificial Intelligence, AI, facial recognition, Deep learning, AI regulation
