Auto-complete sentences in your Word documents and emails. Apps that anticipate your shopping behaviors. Voice-activated virtual assistants. Artificial intelligence (AI) has come a long way since the days of Clippy, the cheery little Microsoft help bot. And with the modern explosion of AI technology and its use cases, the definition of AI has become murky.

According to our colleague Gene Chao, Founder & Chief Executive of GenRe Ventures, we can break AI into four distinct categories: systems of engagement (think chatbots and automated customer service interactions), traditional machine learning (intelligent and optical character recognition, or ICR/OCR, and the like), reasoning applications (such as the popular ChatGPT), and “doing” applications of AI, in the vein of robotic process automation.

The fuel that drives all four of these AI areas? Content and data. As Chao puts it, “Algorithms don’t mean a hill of beans unless you instruct them to do something or point them toward a set of data to act upon. Right now, AI remains what I call deterministic. It is rules-based. It has boundaries by which it operates. Where it is going, very quickly, is becoming probabilistic. So, it recognizes errors and how to heal. It recognizes judgment; it recognizes so-called reasoning.”

This shift toward the probabilistic capabilities of AI has education and government leaders racing to understand the technology’s implications for workforce efficiency, data security, and ethics.

AI is moving fast

The speed at which AI-enabled technologies evolve is now measured in days and months, not years. As many secondary and higher education institutions have experienced, the introduction of ChatGPT immediately challenged institutional control of student work and learning. The availability of such a powerful AI tool in a learning environment quickly raised ethical alarms, along with concerns over plagiarism and copyright infringement.

As noted in a recent Forbes article, New York City Public Schools effectively banned the most popular form of the technology, ChatGPT, for fear that students would use it to cheat. Many K-12 school systems and institutions of higher learning nationwide are viewing AI tools with a heavy dose of caution. In response, new AI text classifiers are already hitting the market to help detect and deter inappropriate use of ChatGPT and similar tools.
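For the curious, here is a deliberately tiny sketch of the idea behind such classifiers: a model trained on labeled examples that scores how likely a passage is to be machine-generated. The examples and labels below are hypothetical, and real products rely on far larger models and training sets; this is an illustration of the concept, not any vendor’s implementation.

```python
# A toy illustration of what an "AI text classifier" does: score whether
# a passage looks machine-generated. All examples and labels below are
# hypothetical, chosen only to make the sketch self-contained.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "In conclusion, there are many factors to consider regarding this topic.",
    "honestly i just crammed the night before and hoped for the best lol",
    "It is important to note that the aforementioned points are significant.",
    "my lab partner spilled the buffer so we had to redo the whole titration",
]
labels = [1, 0, 1, 0]  # 1 = AI-generated, 0 = human-written (toy labels)

# TF-IDF text features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new passage: probability that it belongs to the AI-generated class.
sample = "It is worth noting that numerous considerations apply here."
print("P(AI-generated) =", classifier.predict_proba([sample])[0][1])
```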

Despite these concerns, AI also presents promising use cases for education and the public sector. “If you look at the technology and toolsets in the right way, it can augment how you improve student learning and the pace at which you learn,” said Chao. At scale, AI has the potential to transform the way educators and staff prepare for, respond to, and thrive in an ever-changing learning environment. Think of Admissions offices gaining the insight to forecast enrollment fluctuations accurately, or Student Life staff proactively adapting programming based on known student behaviors.

Concerns over “digital DNA” security

As they say, “with great power comes great responsibility.” With the proliferation of personal data sharing and access, and increasingly sophisticated ways of using that data (our “digital DNA”), it’s only right to raise the data security flag. For educational institutions, cybersecurity is already a clear priority, one recently reinforced at the White House’s Cybersecurity Summit for K-12 Schools. Most are actively taking steps to protect student information from external bad actors. But is there reason to worry about the integrity of student data when it is leveraged in the name of AI?

At Softdocs, we believe in assuring our customers that their data is safe and private. We will not do business with a provider, whether OpenAI or anyone else, that doesn’t allow us to uphold those commitments and satisfy those requirements for our clients. That assurance is a critical part of the trust we share with our clients and core to our company values.

Impacts on how we work

In addition to augmenting instructional practices, AI technologies hold tremendous promise for staff and administrators. The boom in content and data digitization has created an ocean of information and insights to power AI tools. Education and government leaders see the potential for more intuitive and predictive student services. Imagine combining historical data with machine learning and predictive modeling to analyze, anticipate, and recommend information or actions that improve the student experience across departments and functions. These powerful edtech tools are closer than you think!
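As a simple illustration of that idea, the sketch below fits a basic regression model to a few years of hypothetical enrollment history and projects next year’s number. The columns, figures, and model choice are illustrative assumptions, not a description of any Softdocs product.

```python
# A minimal sketch of predictive modeling on historical enrollment data.
# All years, application counts, and enrollment figures are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: each row is (year, applications received).
history = np.array([
    [2018, 4200],
    [2019, 4350],
    [2020, 3900],
    [2021, 4100],
    [2022, 4500],
])
enrolled = np.array([1480, 1510, 1320, 1400, 1560])  # students who enrolled

# Fit a simple linear model to the historical features.
model = LinearRegression()
model.fit(history, enrolled)

# Forecast next year's enrollment from this year's application volume.
next_year = np.array([[2023, 4600]])
print(f"Projected enrollment: {model.predict(next_year)[0]:.0f} students")
```

In practice, an institution would draw on far richer features (application volume, financial aid data, engagement signals) and validate any model carefully before trusting its forecasts.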

Many Softdocs clients already run Etrieve reports, giving them visibility into internal process efficiencies, student engagement patterns, and more. AI will allow us to take that insight to the next level, turning static data into actionable data and giving institutions and districts analytical assistance they can use immediately to drive better outcomes.

“The data tells the story.” This commentary from our client, Francis Tuttle Technology Center, is one of my favorites. At Softdocs, we’re thinking about how AI can surface information about specific processes, helping clients identify inefficiencies and actively solve them. Just as importantly, it can highlight what is working well so they can keep it going. AI-enabled technologies open up vast opportunities for process improvement and organizational success, and we’re excited to help clients take the next step on this promising journey.

Watch our recent ‘What AI Means to Education’ webinar on demand.