The future. It’s the topic on the mind of most business leaders—what’s going to happen, and, just as important, how do you ensure you’re prepared for it? Today, as technological advances expand the scale, scope, and utility of data, a new ecosystem of information and trust is taking shape around us. Businesses are responding by using and reporting information that goes well beyond the financial statements.
While cyber liability losses and privacy claims continue to rise, a new exposure has emerged. Hackers have determined that, given the increased sophistication of computer security, it may be easier to manipulate a person than a machine. With the right policies and procedures in place, Social Engineering Fraud (SEF) is largely preventable; what these professional criminals are counting on, however, is their ability to manipulate an employee into violating the company’s policies.
Financial reports—and even real-time operational data—are often lagging indicators of performance. These metrics, although perhaps lacking precision, may have been effective enough in the past; they are less so now because they trail today’s cadence of information dissemination and business volatility. The challenge for many finance functions is to keep pace with the modern sources of insight and analysis that internal and external stakeholders are already receiving.
The Internet of Things (IoT) connects machines and devices to one another. Today’s devices carry between six and nine sensors capturing and transmitting data to help industries of all kinds become more efficient, more productive, and safer. By 2020, the annual global economic potential across all sectors is estimated at up to $14.4 trillion—roughly the current GDP of the European Union. For businesses to fully realize the great potential of the IoT, they will need to be prepared for the privacy, cybersecurity, and liability risks that lie ahead.
With people, assets, and services becoming increasingly connected by software and hardware—the Internet of Things (IoT)—physical risks are now directly intertwined with digital risks. Will errors made by artificial intelligence be treated more like products liability or vicarious liability? Because the IoT is so new, there is as yet no definitive legal reference or concise body of regulation on the topic.
The growth of ChatGPT and other artificial intelligence (AI) tools is not slowing down. From small startups to multinational corporations, employees across the spectrum are leveraging ChatGPT to enhance their productivity and streamline their workflows. Given the potential risks associated with ChatGPT and similar tools—including breaches of confidentiality and violations of personal data and privacy protections—it’s crucial for companies to provide guidance to their employees.
Artificial Intelligence (AI), widely considered “The Fourth Industrial Revolution,” is the latest technological disruption, fueling growth and reshaping societies. While AI presents investment opportunities and early winners are already reaping its benefits, the future of AI remains uncertain. In this early stage of AI’s advancement, it is important to understand the inherent risks of concentrating portfolios in themes and trends—including AI—that are likely to evolve and shift over time.
As artificial intelligence (AI) and generative AI (GAI) continue to evolve and become integral to business operations, businesses must be mindful of the risks associated with deploying AI solutions. Although there is not yet a comprehensive law governing AI, regulators already have tools to hold businesses accountable, and they are focused on transparent, explainable AI solutions that let consumers and key stakeholders understand how these systems operate and make decisions.
Many employers have begun using artificial intelligence (AI) tools supplied by third-party vendors. On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) issued guidance indicating that, in its view, employers are generally liable for the outcomes of using selection tools to make employment decisions. Learn more about which tools the guidance covers and how it clarifies an employer’s responsibility for discrimination arising from AI-based employment tools.
The growing use of video and automated technology, including artificial intelligence (AI), in employment practices—and the concern that the technology may foster discrimination and bias—has triggered a wide array of regulatory efforts. At least 11 bills have been introduced targeting the use of AI-related technology to assist with employment decisions. Employers should take note of enacted and proposed legislation and consult with legal counsel before implementing automated employment technologies.