By Rajesh Jaluka
There’s been a notable increase in demand for legislation to guide the future of generative AI technology. This renewed interest in the role government plays in technology raises the question: does technology influence legislation or vice versa?
I recently read an article that caught my attention, not because it called for the US government to regulate technological innovation, but because it called for leveraging technology to shape the legislative process itself. US lawmakers are now pushing to establish a bipartisan commission of experts who would collect, review, and analyze data and make evidence-based policy recommendations to Congress.1
As a proponent of truth in data, I applaud the lawmakers advocating for evidence-based policies. As a technologist who has worked with the public sector for nearly two decades, I’m passionate about the role technology can play in helping lawmakers deliver on their evidence-based promise.
The promise of evidence-based policymaking
The new initiative would not only create more transparency but also enable lawmakers to harness the power of data, gaining deep insights for future policymaking and helping prioritize the services government delivers to its citizens. But data can be hard to find across different systems and organizations, and harder still to clean and normalize, all of which creates significant barriers for lawmakers hoping to use data in their policymaking.
Technology can change that. Here are just a few examples of how a modern, robust technology foundation can provide the tools to leverage existing data for effective policymaking (a brief code sketch after the list illustrates these steps):
- Extraction: Data lives in diverse formats across different organizations and systems. Data integration technology has matured significantly, making it easier to extract data from these diverse sources.
- Normalization: Data in these systems can be in varying structured and unstructured formats. Advances in machine learning can help normalize the structure and bring uniformity.
- Cleansing: Data quality varies widely due to differing validation rules, or the lack of them. Algorithms and statistical methods can cleanse the data by identifying and removing duplicate records, flagging missing values and inconsistent formats, enriching existing records, and converting everything to a standardized format or structure.
- Analysis: Data science has recently emerged as a multi-disciplinary field that integrates domain knowledge with statistics and computer science to provide deep insights not only from structured data but also from noisy and unstructured data.
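To make these steps concrete, here is a minimal sketch of such a pipeline in Python using the pandas library. The source files, column names, and cleansing rules are all hypothetical, chosen only to illustrate the extraction, normalization, cleansing, and analysis steps described above; a real government data pipeline would rely on far more robust tooling.

```python
import pandas as pd

# --- Extraction: pull data from diverse sources (file and column names are hypothetical) ---
benefits = pd.read_csv("state_benefits.csv")    # structured export from one agency
grants = pd.read_json("federal_grants.json")    # semi-structured feed from another

# --- Normalization: map both sources onto one uniform structure ---
benefits = benefits.rename(columns={"recip_id": "recipient_id", "amt": "amount_usd"})
grants = grants.rename(columns={"recipient": "recipient_id", "award": "amount_usd"})
records = pd.concat(
    [benefits[["recipient_id", "amount_usd", "year"]],
     grants[["recipient_id", "amount_usd", "year"]]],
    ignore_index=True,
)

# --- Cleansing: handle inconsistent formats, missing values, and duplicates ---
records["recipient_id"] = records["recipient_id"].astype(str).str.strip().str.upper()
records["amount_usd"] = pd.to_numeric(records["amount_usd"], errors="coerce")
records = records.dropna(subset=["recipient_id", "amount_usd"])  # drop incomplete rows
records = records.drop_duplicates()                              # remove duplicate records

# --- Analysis: a simple evidence summary a policymaker might start from ---
spend_by_year = records.groupby("year")["amount_usd"].agg(["count", "sum", "mean"])
print(spend_by_year)
```

Even a toy example like this shows the pattern: each stage reduces friction for the next, so that by the time analysts ask policy questions, the data is already uniform enough to answer them.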