
Machine Learning Models: The Legal Need for Editability (Part I)

Steven Carlson, Roger Bodamer, and Dr. Michael Meehan, writing for IPWatchdog, discuss the importance of having the proper rights to training data when building machine learning models and systems.

A widespread concern with many machine learning models is the inability to remove traces of legally tainted training data. That is, after a model has been trained, it may be determined that some of the underlying data used to develop it was wrongfully obtained or processed. The ingested data may include files an employee took from a former company, and thus be tainted with misappropriated trade secrets. Or the data may have been lawfully obtained, but without adequate permissions to process it. With the rapidly evolving landscape of data-usage restrictions at the international, federal, state, and even municipal levels, companies holding troves of lawfully obtained data may find that using that data in their machine learning models becomes illegal.

Read the full article at the link below.

