Source : https://www.wwt.com/article/data-science-trends-2023
AI is transforming how we live and how we solve problems at work, and data science is how AI happens. Here are the data science trends we see for 2023. Some of 2022's greatest technology advancements were driven by data science, and the industry has evolved at such a rapid pace that it can be hard to keep up. To help you navigate, we have compiled the key data science trends of 2022 from the perspective of practitioners, why they are trending, and how they may continue in 2023. In 2023, we anticipate an increased need for AIOps and MLOps so that companies can remain competitive while having access to smart, real-time data and analytics. As conventional cloud computing becomes increasingly underequipped to manage copious amounts of data, edge computing will become crucial, and generative models will let companies produce fresh content across industries. At the same time, companies face increasing pressure to ensure their data is safe and secure and that it is used in ways that mitigate harm. Data governance, ethical AI and sustainable AI will continue to be central themes that companies prioritize throughout the year.
The first noteworthy trend of 2022 was Artificial Intelligence for IT Operations (AIOps), a term coined by Gartner in 2016 for the application of AI and related technologies, such as machine learning (ML) and natural language processing (NLP), to traditional IT operations activities and tasks. AIOps is an umbrella term encompassing both domain-centric solutions, such as log, application and network monitoring, and domain-agnostic solutions, which operate across clouds and on-premises infrastructure. While the practice is not new, many businesses leaned on it in 2022 because of its increasingly advanced analytics capabilities. The year shined a light on what AIOps can do, such as allowing companies to automatically spot and react to issues in real time and proactively alerting them to potential problems, so we foresee strong demand for AIOps throughout 2023. More generally, we foresee continued demand for automation over the next few years as businesses become smarter and more predictive. An industry study points the same way: 70 percent of respondents said their organizations were at least piloting automation technologies in 2022, up from 66 percent in 2020 and 57 percent in 2018. 2023 will be a year of maturity for AIOps.
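The AIOps capability described above, automatically spotting issues in a metric stream in real time, can be illustrated with a toy anomaly detector. This is a minimal sketch, not a production AIOps tool: the rolling z-score rule, the window size, the threshold and the simulated latency metric are all illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag points whose z-score against a rolling window exceeds threshold."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Simulated latency metric: steady around 100-104 ms, with one injected spike.
metrics = [100.0 + (i % 5) for i in range(40)]
metrics[30] = 250.0  # injected incident
print(detect_anomalies(metrics))  # → [30]
```

Real AIOps platforms layer ML models over many telemetry sources at once; rolling statistics are simply the most basic baseline for the "spot and alert" behavior described above.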
In 2022, AI paved the way for generative models that create content such as blogs, poetry and artwork. While generative AI is not new, the recent convergence of several trends has made it possible to productize generative models and bring them to everyday applications. Even domain-oriented use cases gained attention, from converting x-ray or CT scans into photo-realistic images in healthcare to converting satellite images into map views in the transportation industry. The trend started in 2014 with the introduction of Generative Adversarial Networks (GANs), which created face images from random noise. Along with the variational autoencoder (VAE), GANs ushered in the era of deepfakes. In 2017, we saw the introduction of the transformer, a deep learning architecture that underpins large language models (LLMs) such as GPT-3. In 2021, OpenAI released the text-to-image generator DALL-E, which combined techniques such as CLIP and diffusion to generate high-resolution images with stunning detail. Finally, at the end of 2022, we were introduced to ChatGPT, which brought the power of an LLM into a conversational chatbot. We anticipate increased mainstream adoption of generative AI, not only across small businesses but also within Big Tech (e.g., Microsoft is actively investing in OpenAI with an eye to its own search products and to implementing the technology in its Office suite). Generative AI also enables synthetic data generation, which can improve model performance and save time and cost in ML deployments. However, the generative industry brings ethical complications, such as deepfakes being used to spread misinformation. We're shifting from an era of generating blurry, fingernail-sized black-and-white images of faces to one in which lone amateurs can copy an artist's style and companies sell AI-generated prints that are explicit knock-offs of living designers.
Questions of legality and ethics will inevitably become more pressing. In the coming years, organizations will need to ensure these algorithms are used responsibly and ethically.
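As a toy illustration of the synthetic-data idea mentioned above, the sketch below fits independent per-column Gaussians to a small numeric dataset and samples new rows from them. Real generative models (GANs, VAEs, diffusion models) learn far richer joint distributions; the column-wise Gaussian fit and the sample data here are simplifying assumptions.

```python
import random
from statistics import mean, stdev

def fit_gaussians(rows):
    """Estimate a (mean, stdev) pair per column from real numeric data."""
    cols = list(zip(*rows))
    return [(mean(c), stdev(c)) for c in cols]

def sample_synthetic(params, n, seed=0):
    """Draw n synthetic rows from independent per-column Gaussians."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sigma) for mu, sigma in params] for _ in range(n)]

# Tiny illustrative dataset: two numeric measurements per record.
real = [[5.1, 3.5], [4.9, 3.0], [4.7, 3.2], [4.6, 3.1], [5.0, 3.6]]
params = fit_gaussians(real)
synthetic = sample_synthetic(params, 100)
print(len(synthetic), len(synthetic[0]))  # → 100 2
```

The fit-then-sample pattern is the essence of why synthetic data can save time and cost: once the distribution is captured, arbitrarily many training rows can be generated without collecting new data.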
Physics-informed AI

Humankind has always been inspired by nature. Over the years, our understanding has grown from the laws governing the physical world to the inner workings of the human genome, and we are now closer than ever to encoding that knowledge directly into AI systems, which opens numerous new industrial applications. AI methods like deep neural networks rely on large datasets of real-world behavior to learn. For vision-based applications, where it is possible to gather this huge amount of data, these methods work well, but they fail to learn complex behaviors for which adequate data is lacking. Physics-informed AI refers to a set of AI methods that infuse physical knowledge of the world into the learning process. These approaches bring AI to science at a new scale and extend the range of possible industrial applications into the metaverse. The digital twin is one such tool: a virtual replica of a real-world environment, updated in real time. Companies are creating digital twins of warehouses and factories, for example, to optimize their layout and logistics, and companies such as NVIDIA are already building frameworks to support this work.

Composite AI

Composite AI, also known as multidisciplinary AI, is the combined application of different AI techniques to improve the efficiency of learning and broaden the level of knowledge representation. Organizations face more complex problems than ever before, involving vast quantities and multiple types of data. Solving them requires a combination of AI solutions, ranging from traditional machine learning approaches, NLP and graph techniques to rule-based approaches. An example in retail is using predictive AI, forecasting and conversational AI together to identify consumer preferences and offer incentives for customers to make additional purchases during the return process.
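The retail return-flow example can be sketched as a composite pipeline in which a predictive score and hand-written business rules jointly decide whether a conversational assistant makes an offer. Everything here, the feature names, thresholds and offer text, is hypothetical; a real system would use a trained model rather than the stand-in scoring function.

```python
def purchase_propensity(customer):
    """Stand-in for a trained predictive model scoring repurchase likelihood.
    (Hypothetical features; a real system would use a fitted model.)"""
    score = 0.2
    if customer["past_purchases"] > 5:
        score += 0.4
    if customer["days_since_last_visit"] < 30:
        score += 0.3
    return min(score, 1.0)

def return_flow_offer(customer):
    """Composite decision: an ML score plus business rules decide whether
    the conversational assistant offers an incentive during a return."""
    if customer["return_reason"] == "fraud_suspected":  # hard rule wins
        return None
    if purchase_propensity(customer) >= 0.6:
        return "10% off your next order if you exchange instead of return"
    return None

customer = {"past_purchases": 8, "days_since_last_visit": 12,
            "return_reason": "wrong_size"}
print(return_flow_offer(customer))  # prints the incentive offer
```

The point of the composite approach is visible in the structure: the learned component and the symbolic rules are separate pieces, and the rule can veto the model, which is often a governance requirement in production.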
According to Gartner, by 2025, 70 percent of organizations will shift their focus from big data to "small and wide data" (i.e., using fewer and more diverse data sources to drive insights). Composite AI can help by combining human expertise with approaches like few-shot learning and synthetic data generation. This will not only embed more reasoning and intelligence but will also expand the scope and quality of AI applications.
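Few-shot learning, one of the "small data" approaches mentioned above, can be illustrated with a nearest-prototype classifier: each class is represented by the mean of a handful of labeled examples, and new points are assigned to the closest prototype. The two-class "defect"/"normal" setup and the 2-D features below are illustrative assumptions, not a method from the article.

```python
from math import dist  # Euclidean distance, Python 3.8+
from statistics import mean

def fit_prototypes(support):
    """Compute one prototype (mean vector) per class from a handful
    of labeled 'support' examples."""
    return {label: tuple(mean(col) for col in zip(*examples))
            for label, examples in support.items()}

def classify(prototypes, x):
    """Assign x to the class with the nearest prototype."""
    return min(prototypes, key=lambda label: dist(prototypes[label], x))

# Two classes, three labeled examples each: a "3-shot" setup.
support = {
    "defect": [(8.0, 1.0), (9.0, 2.0), (8.5, 1.5)],
    "normal": [(1.0, 8.0), (2.0, 9.0), (1.5, 8.5)],
}
protos = fit_prototypes(support)
print(classify(protos, (8.2, 1.3)))  # → defect
```

With only three examples per class the classifier already generalizes, which is the appeal of few-shot methods when wide-but-small data replaces big data.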
Edge computing occurs close to where data is collected, enabling real-time decision-making based on data from internet-connected sensors on factory floors, transport networks, retail outlets and remote locations. Gartner predicts that by 2025 more than 50 percent of enterprise-managed data will be created and processed outside the data center or cloud. We are seeing continuous growth in edge analytics, especially in use cases where low latency is required to act on real-time data. To facilitate this, the edge is now getting "foggy": fog computing, sometimes described as "edge to the edge," pushes processing still further out, adding computation, storage and communication capacity through "micro" data centers. As Meta continues to invest in network infrastructure for the metaverse, it has seen how edge computing can take pressure off the public cloud and provide a faster customer experience. In the next two to five years, chips and operating systems will be optimized for the edge and companies will adopt ultralow-latency 5G networks, both core components of the Industrial Internet of Things (IIoT). This will help accelerate the next generation of automation and open up a variety of new AI use cases.
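The latency and bandwidth benefit of processing data at the edge can be sketched with a simple local-aggregation step: raw sensor readings stay on the device, and only a compact summary (plus any alert-worthy readings) is shipped upstream. The JSON payload shape and the alert threshold are illustrative assumptions.

```python
import json
import statistics

def edge_summarize(readings, alert_threshold=80.0):
    """Aggregate raw sensor readings locally; ship only a compact JSON
    summary (and any out-of-range readings) upstream to the cloud."""
    summary = {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "alerts": [r for r in readings if r > alert_threshold],
    }
    return json.dumps(summary)

# 1,000 raw temperature readings stay on the device;
# only a short summary string goes upstream.
readings = [70.0 + (i % 10) for i in range(1000)]
payload = edge_summarize(readings)
print(payload)
```

Replacing a thousand raw data points with one small payload is the core trade the article describes: local compute absorbs the volume, and the cloud sees only what it needs for monitoring and alerting.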