After Pause, AI Software Market Expected to Soar

The global AI software market is forecast to grow from its current $16.4 billion to as much as nearly $100 billion by 2025 as industry sectors like health care step up their rate of adoption.

While the novel coronavirus has prompted some downward revisions to otherwise vibrant AI software market predictions, analyst firm Omdia issued a forecast range over the next five years from just under $60 billion to as high as about $130 billion. Its “moderate” forecast of $98.8 billion represents a 22-percent downward revision from its pre-COVID estimate of the cumulative market size through 2025.

The market tracker said its revised forecast accounts for pandemic-driven “retrenchment” in sectors like oil and gas as well as mining. Those declines flattened the AI software growth curve, with health care and the rise of remote work and education perhaps making up the difference in more optimistic scenarios.

Then there is the first-mover advantage, which Omdia said has widened the gap between aggressive and reluctant AI deployers.

“Economic effects from the COVID-19 pandemic have widened the dichotomy between early AI adopters—the ‘AI haves’—and the trailing followers—the ‘AI have-nots,’” said Omdia senior analyst Neil Dunay. “Industries that have pioneered AI deployments and have the largest AI investments are likely to continue to invest in what they view as proven, indispensable technology for cost cutting, revenue generation and enhancing customer experience.”

Omdia said it attempted to differentiate between AI hype and reality by identifying no fewer than 340 potential AI use cases across 23 industry sectors. Key applications driving the global AI software market include voice and speech recognition, video surveillance, virtual digital assistants for customer service and product marketing, IT monitoring, and supply chain and inventory management.

Pre-pandemic, market forecasters were betting data analytics would continue as the main driver for AI, but some have concluded that deep learning-based vision and language applications will propel the AI software market over the long haul.

Dissected differently, IDC Corp. said last month the narrower AI software platform sector would slow through 2021, then accelerate “significantly” through 2024. It expects the platform segment to grow at an annual compound rate of 31.1 percent through 2024, reaching $13.4 billion in global revenues.

"Software and applications vendors are taking advantage of advanced machine learning, conversational AI, and other AI technologies to provide benefits for their customers and users as well as improve ROI and achieve cost savings,” said David Schubmehl, research director for IDC's AI software platforms practice.

The market tracker foresees “the emergence of a healthy AI software platforms market being fueled by startups and enterprise software vendors as well as a robust open source community contributing new algorithms, libraries, models and tools,” Schubmehl added.


ARM’s Segars Unwraps ‘5th Gen’ Computing Vision

The simultaneous maturation of three technologies—AI, the Internet of Things and 5G wireless—is ushering in a data-driven 5th wave of computing underpinned by current cloud infrastructure, according to the chief executive of chip intellectual property vendor ARM.

Speaking at an annual Defense Department microelectronics summit, Simon Segars said new chip architectures are being designed around IoT sensor networks that will generate huge data volumes collected and transported by emerging 5G networks. Those massive data sets can then be used to train machine learning models to extract useful information via new AI algorithms.

As DoD seeks to revive U.S. chip manufacturing, drive semiconductor innovation and secure its microelectronics supply chain, “You have to think about the underlying technologies, the science that drives the next evolution of all of these technologies,” Segars told this week’s Electronics Resurgence Initiative (ERI) summit sponsored by the Defense Advanced Research Projects Agency (DARPA).

“And that’s an area where I think governments can play a big role in helping stimulate research—because this stuff is really hard.”

Simon Segars

Segars echoed growing bipartisan calls for rebooting western chip manufacturing after decades of offshoring. The pandemic has further exposed technology supply chain vulnerabilities, fueling concerns among military planners that DoD will lose access to leading edge IC manufacturing capabilities.

Another key issue at the annual DARPA conference is forging a post-Moore’s Law technology strategy. Among ERI’s goals is figuring out the semiconductor industry’s next move as Moore’s Law runs out, said Stephen Welby, former assistant secretary of defense for research and engineering.

Hence, ARM and other chip designers are looking to new architectures to squeeze what is left from Moore’s Law and Dennard scaling.

“Physics gets in the way [and] you just can’t keep scaling transistors anymore,” Segars said. “So, you have to get really creative when it comes to architecture.” Adding another dimension via 3D integration and clever die packaging represent a possible path forward.

That approach has been used to develop chiplets in which individual dice are integrated into a single package with connections between them. Still, design challenges remain. “We need to make [chiplet] technology easier to use, and there is work we can do on the design methodology front to help with that,” Segars said.

Stacking memory, logic and analog components also would allow designers to pack more transistors into the same space, delivering greater performance and improved manufacturing yields. “This ability to go vertical, if we can simplify that, then I think that is going to unlock another dimension of performance,” the ARM CEO said.

The rise of IoT is also raising new concerns about chip security in billions of connected devices. A key requirement for securing IoT networks will be developing standards for hardening devices and growing the microelectronics ecosystem.

ARM previously devoted major resources to IoT security frameworks, including a "secure core" approach to bulletproofing the IoT that addressed network security at the microcontroller level.

“Security is probably the least well-defined aspect of technology,” Segars said. “It’s a changing landscape.” With the potential for deploying billions of potentially vulnerable IoT devices, “We need standards [for] how security is expressed,” he said. “You need to define what you are secured against.”

An IoT security framework driven by industry and government stakeholders would also help streamline product development so that industrial and sensor networks aren’t a hodge-podge of random devices with varying levels of security.

“We’re really trying to turn this into an industry thing,” said Segars.

Edge Computing Seen Transitioning to ‘Intelligent Edge’

The extension of cloud computing capabilities from datacenters to the somewhat amorphous network edge, variously defined as a connected device, appliance or a network gateway, is morphing into something more than edge computing. With the addition of AI, edge cloud computing is approaching what promoters dub the “intelligent edge,” a construct that can be implemented according to application, while incorporating AI, hyperscale services, low-latency, high-bandwidth connectivity and secure IT services.

The enterprise version of edge computing “likely requires a bespoke set of solutions customized for their operations and goals,” concludes an assessment of the “intelligent edge” by business consultant Deloitte.

Those tailor-made solutions would enable deployment of new intelligent edge services that incorporate machine learning models, custom semiconductors, inexpensive and low-power edge computing and broadband pipelines for data analytics applications.

Moving those cloud-based capabilities out from datacenters and 5G wireless base stations, for instance, would enhance data analytics capabilities via closer proximity to where sensor data is collected and stored.

According to the Deloitte assessment, “The intelligent edge is not a replacement for enterprise or hyperscale cloud datacenters but [is rather] a way to distribute tasks across the network” based on connectivity, security and priority.

The combination of edge computing and network intelligence opens up a range of low-latency applications such as remote control of industrial cobots, or collaborative robots, to drones and, perhaps self-driving vehicles.

The shift to the edge is also scaling down processing power to the footprint of micro-datacenters, or what the industry analyst refers to as a “supercomputer in a briefcase.” That configuration would provide processing, storage and networking. Those capabilities are then augmented by network virtualization tools.

Most of the intelligence—and therefore, automation—embedded in edge networks derives from the steady advances and growing adoption of AI-enabled automation. Once an edge appliance is connected to an intelligent edge platform, it can operate autonomously.

“Increasingly, chips that are specialized and optimized to run AI and machine learning tasks are moving into edge appliances,” the study notes. Along with ubiquitous graphics processors, “edge-specific” chip architectures are emerging based on Tensor processing units, ASICs and even neuromorphic chips.

Those building blocks can be tailored to a range of intelligent edge applications spanning general-purpose programmability to specific tasks.

“Edge AI is also complementary to cloud AI, with reaction at the edge and learning in the core,” the report notes (italics in the original). “Resource-intensive training of algorithms can be done in the cloud and then shared out to the edge where lighter inference capabilities can quickly act on data.”
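The train-in-the-cloud, infer-at-the-edge split the report describes can be illustrated with a minimal sketch. All names here are hypothetical: a real deployment would train a neural network centrally and export a compact (often quantized) model to edge hardware, but the division of labor is the same.

```python
# Minimal illustration of "learning in the core, reaction at the edge":
# the heavy fitting step runs centrally, and only a tiny resulting
# "model" is shipped to edge devices for fast, lightweight inference.

def train_in_cloud(samples):
    """Resource-intensive step: derive an anomaly threshold from
    historical sensor readings (stands in for model training)."""
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)
    # The "model" shipped to the edge is just two numbers.
    return {"mean": mean, "std": variance ** 0.5}

def infer_at_edge(model, reading, k=3.0):
    """Lightweight step run on the edge device: flag readings more
    than k standard deviations from the training mean."""
    return abs(reading - model["mean"]) > k * model["std"]

history = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]
model = train_in_cloud(history)       # done once, in the core
print(infer_at_edge(model, 20.1))     # normal reading -> False
print(infer_at_edge(model, 35.0))     # anomalous reading -> True
```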

Percentage reporting advanced wireless technology is very or extremely important for deploying the technologies shown. Source: Deloitte

The other intelligent edge driver is the emergence of 5G wireless networks, delivering greater bandwidth and greatly reduced latency. The Deloitte study argues that the intelligent edge is driving adoption of 5G and Wi-Fi 6 routers targeting edge applications.

Sixty-two percent of executives surveyed told Deloitte they are deploying or plan to deploy the next-generation wireless technologies within the next year.

Indeed, those advanced wireless pipelines and connections are widely viewed as a “force multiplier” for enabling the components of the intelligent edge: AI, big data analytics, Internet of Things and the ability to move cloud computing to the network edge.

“As more industry leaders adopt and deploy the intelligent edge, more use cases and innovations will no doubt emerge,” the Deloitte study concludes. “How this evolution reshapes networks, services, machines, and the built environment will play out over the next decade.”

Predictive Analytics Firm C3.ai Files for IPO

C3.ai, the predictive analytics firm founded by CRM pioneer Tom Siebel, on Friday announced plans for an initial public offering (IPO) of stock. It intends to trade shares on the New York Stock Exchange under the ticker symbol “AI.”

While the rest of the big data world was focused on using open source software like Spark and Hadoop to build giant clusters, Siebel was quietly assembling his own cloud-based application for collecting and analyzing huge amounts of data at scale.

Founded in 2009 as C3 IoT, the company successfully attracted several large public utilities to its platform. It eventually added a host of larger customers, including banks, healthcare companies, manufacturers, and oil and gas companies to its customer roll.

In fiscal year 2020, C3.ai reported $157 million in revenue, delivering year-over-year growth of 71%, according to its S-1 filed with the SEC today. Software subscriptions for the company’s two major offerings accounted for nearly 90% of that revenue: C3 AI Suite, a general-purpose development and runtime environment for enterprise AI applications, and C3 AI Applications, a collection of industry- and application-specific packaged AI apps.

Running across all the major clouds, C3.ai says it generates 1.1 billion predictions per day on behalf of its customers. It says it has 4.8 million machine learning models in production, powered by data coming from 622 million sensors.

It’s unclear how many customers C3.ai has, but we know they are large. According to the S-1, average deal sizes for each of the past five years were $1.2 million, $11.7 million, $10.8 million, $16.2 million, and $12.1 million, respectively. “We believe this is a high-water mark for the applications software industry,” the company states in its S-1.

Tom Siebel, CEO and founder of C3.ai

The potential market for predictive analytics is $174 billion this year, and will grow to more than $270 billion by 2024, Siebel says. The plan calls for C3.ai to capture as much of that emerging market as Siebel Systems (and subsequently Oracle) did for the CRM market back in the early 2000s.

In the S-1 form, Siebel provided this assessment of the economic opportunity provided by recent technological breakthroughs in AI, cloud computing, and big data:

“Assessing the IT landscape at the beginning of the 21st century, it became apparent that a new set of technologies was destined to constitute another step function that would change everything about the information processing world, dramatically accelerating the growth of IT markets,” he wrote.

“This step function of technologies – substantially more impactful than anything we had seen before – included: elastic cloud computing, big data, the internet of things, and AI or predictive analytics. Today, at the confluence of these technology vectors we find the phenomenon of Enterprise AI and Digital Transformation, mandates that are rising to the top of every CEO’s agenda.”

It is perhaps not surprising that one enterprise computing trend did not make that list: open source software development. In a 2017 Q&A with Datanami, Siebel did not hide his disdain for the big data trend of the day. He said:

“Some of them do useful things. HDFS is a perfectly good distributed file system. Spark is great for in-memory virtualization of data, and Mesos is great as a virtualization layer. These things are all kind of useful. But the idea of some IT organization cobbling all that stuff together into a system that works is patently absurd. It’s more absurd than if I were to buy 70 commercially viable companies. It’s impossible.”

Morgan Stanley, J.P. Morgan and BofA Securities are acting as lead book-running managers for the proposed offering, while Deutsche Bank Securities is acting as a book-running manager. Canaccord Genuity, JMP Securities, KeyBanc Capital Markets, Needham & Company, and Piper Sandler are acting as co-managers for the proposed offering.

This article first appeared on sister website Datanami. 

Don’t Be Too Quick Trusting That 2020 Data in Your 2021 IT Planning

As companies approach their 2021 business planning after a tumultuous, pandemic-disrupted year, it would be wise to take great care when using 2020 business data to make projections for the new year.

That's the recommendation from data intelligence vendor Alation, which recently conducted its second State of Data Culture survey of 2020 to learn how companies are coping with data-based business decisions as the COVID-19 pandemic continues.

About two-thirds of the 300 respondents to the survey reported that they are leery of the validity and accuracy of their 2020 business data for 2021 planning, according to the 31-page quantitative report. The survey was conducted by Wakefield Research between Nov. 2 and Nov. 16; respondents included data and analytics leaders at companies with more than 2,500 employees in the U.S., U.K., Germany, Denmark, Sweden and Norway.

“Because 2020 is considered anomalous, two-thirds of data professionals (64%) say they are concerned about relying on this year’s data for planning purposes,” the company says in the report, which was released Dec. 17.  “Even as companies are adjusting to ‘the new normal,’ they question how to use what happened in 2020 for planning purposes. Adjust for it? Throw it out? Use other sources of data?”

Alation debuted its first State of Data Culture report in September as a way to measure the progress organizations are making in implementing a data culture. The report and associated survey aim to measure the maturity and pervasiveness of various components that, according to Alation, collectively comprise data culture, including things like data search and discovery, data literacy and data governance.

Less than 10% of organizations have adopted those three pillars of data culture across all of their departments, according to the latest quarterly report. And a data culture “disconnect” persists, in which organizational leaders believe their data culture is better than it really is.

However, organizations may be forgiven for not taking a purely data-driven approach to planning for 2021, thanks to the anomaly that is 2020. Instead of by-the-book data-driven forecasting and budgeting, organizations are taking an “all of the above” approach to guide their planning, the report continues.

According to Alation, 55% of business leaders are using economic or financial news to drive their decisions, while 50% are tapping into data from 2019 or earlier. More than half are watching their competitors’ activities, while less than half are seeking third-party insights and news about the pandemic.

The study found that, overall, 15% of data professionals say their organizations were prepared to operate in a crisis, while the rest were somewhat unprepared, mostly unprepared, or completely unprepared. Among organizations with the lowest data culture scores, the fraction that was prepared for a crisis drops to just 2%.

COVID-19 also contributed to a significant decline in “tribal knowledge” among organizations, with 66% of study participants saying they lost a lot or some critical knowledge due to staffing changes that were caused by the pandemic.

“The loss of this tribal knowledge can impact the company for years to come, causing confusion and bad decisions based on misunderstood analytics,” Aaron Kalb, Alation’s co-founder and its chief data and analytics officer, says in a press release. “Companies must capture this knowledge and share it across the enterprise to become data-driven and achieve successful business outcomes.”

It wasn’t all bad news, however: Alation says progress was made in the fourth quarter on its Data Culture Index (DCI). The survey found that 35% of companies ranked as having a top-tier data culture, up from 33% in the third quarter, while 30% ranked as having a low-tier data culture, down from 35% previously.

There is also some reason to be optimistic about staffing plans for 2021, according to the report. While forward-looking projections may be unreliable, 52% of survey respondents said they still believe their company will be hiring in 2021, and 72% said they expect data and analytics budgets to be fully restored in 2021.

This article first appeared on sister website, Datanami. 

Noogata Secures $12M Seed Funding Round for its No-Code Enterprise AI Platform

The combination of AI and no-code software development platforms is accelerating efforts to scale enterprise applications, including data collection, modeling and analysis.

Those are among the goals of Noogata, a no-code AI startup targeting enterprise data analytics tools that can be built by non-programmers.

The two-year-old startup based in Tel Aviv, Israel, announced a $12 million seed funding round this week led by Team8 Capital with participation from Skylake Capital.

Low-code for AI is touted as allowing novice developers and data scientists alike to use AI building blocks to quickly spin up enterprise applications. Noogata’s framework initially focuses on data analytics tools that could be used across a company’s operations. Other use cases are in the pipeline, the startup said Tuesday (March 16).

“A user would use our platform to select a use case and deploy the different blocks within,” Assaf Egozi, Noogata’s co-founder and CEO, told EnterpriseAI.com. “Then [they would] connect it to their enterprise data—typically from their data warehouse—and automate it end-to-end on our platform. The customer doesn't need to design or code the models and other parts of the analytics pipeline.”
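Noogata’s blocks are proprietary, but the workflow Egozi describes maps onto a familiar composable-pipeline pattern. The sketch below uses entirely hypothetical block names and a toy “model” to show the shape of the idea, not Noogata’s actual API.

```python
# Hypothetical sketch of a no-code-style "block" pipeline: prebuilt steps
# are chained over warehouse rows without the user writing model code.

def clean_block(rows):
    """Transformation block: drop rows with missing revenue values."""
    return [r for r in rows if r.get("revenue") is not None]

def forecast_block(rows):
    """Model block: naive demand 'forecast' as the mean observed revenue."""
    values = [r["revenue"] for r in rows]
    return sum(values) / len(values)

def run_pipeline(rows, blocks_then_model):
    """Apply transformation blocks in order, then the final model block."""
    *transforms, model = blocks_then_model
    for block in transforms:
        rows = block(rows)
    return model(rows)

warehouse_rows = [{"revenue": 120.0}, {"revenue": None}, {"revenue": 80.0}]
print(run_pipeline(warehouse_rows, [clean_block, forecast_block]))  # 100.0
```

The appeal of the pattern is that the end user only selects and orders blocks; connecting the pipeline to a real data warehouse and automating it end-to-end is the platform's job.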

The startup notes its no-code AI platform can be integrated with existing enterprise data systems, eliminating the need for internal development while expanding capabilities beyond narrow, off-the-shelf tools.

Those AI-based capabilities also would allow novice users to expand beyond traditional business intelligence tools to forge new “self-service” analytics tools, Egozi said.

Along with scaling enterprise data analytics, the startup is also looking to extend its no-code AI framework to other applications. “We are also working on use cases to drive automation and personalization using prediction,” Egozi added. Examples include demand forecasting to optimize manufacturing.

Noogata’s startup team includes a mix of veterans who previously worked at Amazon Web Services (NASDAQ: AMZN), Cisco Systems (NASDAQ: CSCO), the Israeli software startup IronSource and Totango.

The team also includes business consulting veterans from KPMG and McKinsey & Company. “Our startup team is fairly unique as it has equal parts of engineer [and] data-scientist,” Egozi said.

Torch.AI Secures $30M Series A Funding Round to Expand its Nexus AI Platform

Torch.AI, the profitable startup applying machine learning to analyze data “in-flight” via its proprietary synaptic mesh technology, announced its first funding round along with expansion plans.

The Series A round garnered $30 million and was led by San Francisco-based WestCap Group. Torch.AI said Wednesday (March 17) it would use the funds to scale its Nexus AI platform as its customer base, which spans financial services, manufacturing and the U.S. government, expands. The three-year-old AI startup’s software seeks to unify different data types via a synaptic mesh framework that reduces data storage while analyzing data on the fly.

“There’s just too much information, too many classes of information,” said Torch.AI CEO Brian Weaver. That's where Torch.AI can help enterprises that are dealing with regulatory and other data governance pressures as they find that they can’t trust all the data they store, he said.

Working early on with companies like GE (NYSE: GE) and Microsoft (NASDAQ: MSFT) on advanced data analytics, Weaver asserted in an interview that current technology frameworks compound that complexity. The shift to AI came while working with a financial services company struggling to process huge volumes of real-time transactions.

“We figured out that we could use artificial intelligence just to understand the data payload, or the data object, differently,” Weaver said.

The result was its Nexus platform, which creates an AI mesh across a user’s data and systems, unifying data by “increasing the surface area” for analytics. That approach differs fundamentally from the “store and reduce” model, in which information is dumped into a large repository and machine learning is then applied to cull usable data from it.

“I’ve got to store it somewhere first, then I’ve got to reduce [data] to make use of it,” the CEO continued. That approach “actually compounds [data] complexity…impedes a successful outcome in a lot of ways and introduces at the same time a lot of risk.”

Torch.AI CEO Brian Weaver

Torch.AI’s proprietary synaptic mesh approach is touted as eliminating the need to store all that data, enabling customers to analyze the growing number of data types “in flight.”

“We decompose a data object into the atomic components of the data,” Weaver explained. “We create a very, very rich description of the data object itself that has logic built into it.” The synaptic mesh is then applied to process and analyze data. For example, a video file could be used to analyze data in-memory, picking out shapes, words and other data components as it streams.
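Torch.AI’s synaptic mesh is proprietary, but the in-flight pattern Weaver describes, decomposing each data object into components and emitting a rich descriptor without persisting the raw stream, can be sketched roughly as follows. Everything here is an illustrative stand-in, not the company’s actual technology.

```python
# Rough, hypothetical sketch of analyzing records "in flight": each payload
# is decomposed into atomic components and an enriched descriptor is
# emitted, while the raw payload itself is never stored.
import re

def decompose(payload):
    """Break a raw text payload into 'atomic' components."""
    return {
        "words": re.findall(r"[A-Za-z]+", payload),
        "numbers": [float(n) for n in re.findall(r"\d+(?:\.\d+)?", payload)],
    }

def analyze_in_flight(stream):
    """Consume a stream lazily, yielding descriptors instead of raw data."""
    for payload in stream:
        parts = decompose(payload)
        yield {
            "word_count": len(parts["words"]),
            "max_value": max(parts["numbers"], default=None),
        }

stream = iter(["sensor A reads 41.5", "sensor B reads 39.2 then 44.0"])
for descriptor in analyze_in_flight(stream):
    print(descriptor)
```

Because the generator consumes one payload at a time and keeps only the descriptors, nothing like the "store and reduce" repository is ever built, which is the essence of the in-flight claim.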

The AI application builds in human cognition to make sense of a scene. “My brain doesn’t need to store it, the scene, to determine what’s in it," Weaver noted. “That’s sort of our North Star: Making sense of messy data” by applying AI to unify the growing number of data types while reducing the resulting complexity. “If you think about these workloads, people are actually working for the technology, having to stitch all this stuff together and hope it works. Shouldn’t the technology truly be serving the [customer] who has the problem?”

That’s the startup’s focus as it uses the funding to scale its AI platform. Lead investor WestCap noted Torch.AI’s early profitability and federal certifications. The startup works with several government agencies, including the departments of Agriculture and Defense along with the Centers for Medicare and Medicaid Services.

“As an AI company, we’re unique,” Weaver added. “We’re profitable. We had to have customers who were willing to pay.”

Enterprises Needing Accelerated Data Analytics and AI Workloads Get Help from Nvidia and Cloudera

In April, Nvidia and Cloudera unveiled a new partnership effort to bring together Nvidia GPUs, Apache Spark and the Cloudera Data Platform to help customers vastly accelerate their data analytics and AI workloads in the cloud.

After a few months of fine-tuning and previews, the combination is now generally available to customers that are looking for help in speeding up and better managing their critical enterprise workloads.

“What we've announced and made available is the packaging of that solution in Cloudera’s CDP private cloud-based data platform,” Scott McClellan, senior director of Nvidia’s data science product group, said in an online briefing with journalists on Aug. 3. “All the users can get access … via the same mechanisms they use in general to install Spark 3.0 in their Cloudera Data Platform” architecture.

Scott McClellan, Nvidia

Enabling the combination of technologies is the work Nvidia has done over the past few years in the upstream Apache Spark community to deliver transparent GPU acceleration of Apache Spark workloads, said McClellan.

Customers have been asking for this kind of integration to help them ease the process in their workflows, said McClellan.

“From an IT perspective, we are seeing quite a bit of demand to just simplify the whole experience,” he said. Many enterprises have moved such operations to the cloud because of the “no-touch, self-service” nature of the cloud, he added. “We want to bring a lot of that same experience to enterprises in a hybrid model which is a key focus of the Cloudera Data Platform. There is high demand from IT environments for simple integration and solutions like this.”

Sushil Thomas, the vice president of machine learning at enterprise data cloud vendor Cloudera, agreed.

Sushil Thomas, Cloudera

“[Customers] are screaming for GPUs so they can do all of that work a lot faster,” said Thomas. “The partnership we have will help a lot. And then on the data engineering side, SQL and GPUs and accelerated SQL access is always big as well. There it is more of what is possible and what was not possible before … all the work Nvidia has done in the Spark ecosystem and before the integration that we have done for that within the current platform. Of course, faster SQL, using the latest hardware, is always requested.”

The April partnership announcement from Nvidia and Cloudera laid out plans for the integration of Nvidia’s accelerated Apache Spark 3.0 platform with the Cloudera Data Platform to allow users to scale their data science workflows. The two companies have been working together since 2020 to deploy GPU-accelerated AI applications using the open source RAPIDS accelerator suite of software libraries and APIs, which give users the ability to execute data science and analytics pipelines entirely on GPUs across hybrid and multi-cloud deployments.

Spark 3.0 was the first release to offer GPU acceleration for analytics and AI workloads; the latest version is Spark 3.0.3. Apache Spark is a unified analytics engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.

RAPIDS is licensed under Apache 2.0 and involves code maintainers from Nvidia who continually work on the code for the project. RAPIDS includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes.

The product integration is aimed at enterprise data engineers and data scientists who are looking to overcome bottlenecks created by torrents of increasingly unstructured data. GPU-accelerated Spark processing accessible via the cloud aims to help break logjams that slow the training and deployment of machine learning models.

“Nvidia has already done a lot of work in Apache Spark to transparently accelerate Spark workloads on GPUs where possible,” said Thomas. “Cloudera integrates this functionality into Cloudera Data Platform so that all of our customers and all of their data have access to this acceleration, without making any changes to that application. That is important so customers do not have to go in and rewrite things for their existing applications to work.”

To use the services, a customer can add a rack of GPU servers to their existing cluster, and the Nvidia hardware and Cloudera software take care of the rest, said Thomas.

“Any Spark SQL workloads that can now take advantage of GPUs will use them without any required application changes,” he said. “And this is in addition to any machine learning workloads that will also be accelerated by the GPUs.”
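For context, the transparent acceleration Thomas describes is typically switched on through Spark configuration rather than application changes: the RAPIDS Accelerator registers itself as a Spark plugin that rewrites eligible SQL plans to run on GPUs. A rough sketch of the relevant spark-submit settings follows; the jar path and resource amounts are illustrative, not a definitive Cloudera Data Platform configuration.

```shell
# Illustrative spark-submit settings for enabling the RAPIDS Accelerator
# on a Spark 3.x cluster. Jar location and GPU resource amounts are
# examples only; consult the cluster's actual deployment docs.
spark-submit \
  --jars /opt/rapids/rapids-4-spark.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my_spark_job.py
```

Because the plugin operates at the query-plan level, unsupported operations simply fall back to the CPU, which is what lets existing Spark SQL applications run unmodified.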

This all allows customers to do more on the machine learning side, he said. “This means more compute power on model training and getting to better model accuracy on the data engineering side. It means accelerated processing with 5x or [higher] full stack acceleration for a data science workload. This means you can do five times more in the same data center footprint, which is a huge gain.”

By integrating Nvidia GPUs, Spark and the Cloudera Data Platform, the offering lets enterprises get the benefits of the combination without needing large IT staffs that specialize in Spark, RAPIDS and other less familiar technologies, said McClellan.

“We see a big driver for mainstream enterprise that do not have … a team of Spark committers or a team of skilled solution integrators,” he said. “They should be able to get the same solutions that are more turnkey.”

Tony Baer, analyst

Tony Baer, principal analyst with dbInsight LLC, told EnterpriseAI that for enterprises, such pre-packaged technology partnership offerings are becoming a trend.

“It is Cloudera’s and Nvidia’s attempt to show that in private cloud, you can get all of the optimizations that you could get running from the cloud providers that have their own machine learning services running on their custom hardware,” said Baer. “It will eliminate a lot of the blocking and tackling at the processing level” for customers.

This does not mean, however, that customers will wash their hands completely of the involved processes, said Baer. “That still leaves all the blocking and tackling that they still must otherwise do with the rest of the machine learning lifecycle, from building the right data pipelines to choosing the right data sets, problem design, algorithm selection, etc. This just eliminates the processor part, just like with the cloud machine learning services.”

In June, Cloudera agreed to be acquired for $5.3 billion in a move that will make it a private company. The deal, which is expected to close in the second half of 2021, will sell the company to affiliates of Clayton, Dubilier & Rice and KKR in an all-cash transaction.


The State of AI and the HPC Connection: A Talk with Hyperion Research’s Steve Conway


Looking for an AI refresher to beat the summer heat? In this Q&A, Hyperion Research Senior Adviser Steve Conway surveys the AI and analytics landscape in a time of intense activity and financial backing. Just last week, the National Science Foundation (NSF) announced it had expanded the National AI Research Institutes program to 40 states (and the District of Columbia) as part of a combined $220 million investment. What is all this attention and investment leading up to? What is significant right now? What’s the HPC connection? Keep reading for insights into the questions everyone’s asking.

HPCwire: How would you describe the status of AI today?

Conway: AI is at an early developmental stage and is already very useful. The mainstream AI market is heavily exploiting early AI for narrow tasks that mimic a single, isolated human ability, especially visual or auditory understanding, for everything from Siri and Alexa to reading MRIs with superhuman ability.

HPCwire: What’s the eventual goal for AI?

Conway: The goal over time is to advance toward artificial general intelligence (AGI), where AI machines are versatile experiential learners and can be trusted to make difficult decisions in real time, including life-and-death decisions in medicine and driving situations. Experts debate what it will take to get there and whether that will happen. Hyperion Research asked noted AI experts around the world about this in a recent study. The sizeable group who believe AGI will happen said, on average, it will take 87 years. There was an outlier at 150 years. But whether or not it happens, AGI is an important aspirational goal to work toward.

Steve Conway, Hyperion Research

HPCwire: What role does HPC play in AI?

Conway: HPC is nearly indispensable at the forefront of AI research and development today, for newer, economically important use cases as well as established scientific and engineering applications. One reason why HPC is attracting more attention lately is that it is showing where the larger, mainstream AI market is likely headed in the future. The biggest gifts HPC is giving to that market are 40-plus years of experience with parallelism and the related abilities to process and move data quickly, on premises and in more highly distributed computing environments such as clouds and other hyperscale environments. The HPC community is also an important incubator for applying heterogeneous architectures to the growing number of heterogeneous workflows in the public and private sectors.

HPCwire: Reversing that question, what role does AI play in HPC?

Conway: A recent Hyperion Research study showed that nearly all HPC sites around the world are now exploiting AI to some extent. Mostly, they’re using AI to accelerate established simulation codes, for example by identifying areas of the problem space that can be safely ignored. In cases where the problem space is an extremely sparse matrix, this heuristic approach can be especially helpful. HPC-enabled AI is also used for pre- and post-processing of data.
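The approach Conway describes (using a cheap learned model to flag regions of the problem space that the expensive simulation can skip) can be sketched in a few lines. Everything here is hypothetical: the solver is a stand-in kernel, and a simple magnitude threshold plays the role of the trained surrogate model.

```python
import numpy as np

def expensive_solver(cell):
    # Stand-in for a costly simulation kernel (hypothetical).
    return float(np.sum(cell ** 2))

def cheap_surrogate(cell, threshold=1e-6):
    # A trained model would go here; we approximate with a
    # magnitude test: near-zero regions contribute ~nothing.
    return np.abs(cell).max() > threshold

def simulate(grid, n_cells=4):
    # Split the problem space into cells and run the expensive
    # solver only where the surrogate says the cell matters.
    total = 0.0
    for cell in np.array_split(grid, n_cells):
        if cheap_surrogate(cell):
            total += expensive_solver(cell)
    return total

# Sparse input: most of the problem space is empty.
grid = np.zeros(1000)
grid[10:20] = 1.0
print(simulate(grid))  # prints 10.0; only 1 of 4 cells ran the solver
```

On sparse problem spaces like this one, the solver runs on a fraction of the domain, which is exactly where Conway notes the heuristic pays off.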

HPCwire: What’s the relationship between analytics and simulation in HPC-enabled AI?

Conway: Some applications use analytics alone, but many HPC-enabled AI applications benefit from both data analytics and simulation methodologies. Simulation isn’t becoming less important with the rise of AI. This frequent pairing of simulation and analytics says that HPC system designs need to be compute-friendly and data-friendly. Newer designs are starting to reverse the increasing compute-centrism of recent decades and establish a better balance.

Read the rest of this article on our sister website HPCwire.

Bio: Steve Conway is Senior Adviser of HPC Market Dynamics at Hyperion Research. Conway directs research related to the worldwide market for high performance computing. He also leads Hyperion Research’s practice in high performance data analysis (big data needing HPC).

IBM Watson Health Finally Sold by IBM After 11 Months of Rumors


IBM has sold its underachieving IBM Watson Health unit to a global investment firm for an undisclosed price, after almost a year of rumors that IBM had been trying to exit that part of its business.

In a terse Jan. 21 announcement, IBM said that Francisco Partners is acquiring the healthcare data and analytics assets from the IBM Watson Health business unit, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex, and imaging software offerings.

Rumors that IBM wanted to sell its Watson Health unit – which reportedly brought in $1 billion in revenue annually but failed to turn a profit – had surfaced in the press at least twice since February of 2021. The reports said the move was being weighed so that Big Blue could exit the healthcare market and focus its operations on the lucrative cloud computing market.

A Jan. 21 report on the sale by Bloomberg said the value of the assets involved in the transaction totals more than $1 billion, according to people familiar with the plans.

According to IBM’s announcement, which is the first time that the company has commented on a possible sale of IBM Watson Health since the rumors began, the transaction is expected to close in the second quarter of this year and is subject to customary regulatory clearances.

What is unclear from the company’s press release is whether the sale includes all the analytics and data holdings from Watson Health or if IBM will retain any part of that business at all. The release does not give any further details on the nature of the sale.

Timothy F. Davidson of IBM corporate communications did not respond directly to those specific questions when he replied Jan. 21 to an emailed inquiry from EnterpriseAI.

“The transaction announced today will result in healthcare data and analytics assets that are currently part of the Watson Health business, transferring ownership, upon closing (expected in 2Q22), to Francisco Partners,” Davidson wrote. “Also, upon close, the new standalone company is expected to continue its work as a healthcare AI, data and analytics business delivering industry-leading software, technology and automation solutions across the healthcare value chain.”

Another IBM executive, Tom Rosamilia, a senior vice president with IBM Software, said in a statement that the sale of the Watson Health assets to Francisco Partners “is a clear next step as IBM becomes even more focused on our platform-based hybrid cloud and AI strategy. IBM remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT. Through this transaction, Francisco Partners acquires data and analytics assets that will benefit from the enhanced investment and expertise of a healthcare industry focused portfolio.”

Under the terms of the agreement, the current management team will continue in similar roles in the new standalone company, serving existing clients in life sciences, provider, imaging, payer and employer, and government health and human services sectors, according to IBM and Francisco Partners.

Analysts Respond

Dan Olds, analyst

Dan Olds, the chief research officer for Intersect360 Research, told EnterpriseAI that the sale of the Watson Health assets must be a disappointment to IBM.

“Watson Health was always the example that IBM pointed to when discussing how Watson was going to change the world,” said Olds. “Fast forward seven years and we find IBM selling off its Watson Health unit, which was supposed to be the crown jewel of the Watson product line.”

And though the unit has cracked $1 billion in revenue for the company, “a portion of that can be attributed to several billion dollar-plus acquisitions that bolstered the bottom line for the division,” said Olds. “However, Watson Health failed to make a profit, despite huge loads of marketing hype, thus placing it on the chopping block. There were some notable failures for Watson over the years including a five-year relationship with MD Anderson Cancer Center that ended after MD Anderson alleged that Watson did not provide safe and correct treatment recommendations. Ouch.”

The problem for IBM, said Olds, is that the company “led with marketing and the marketing vision was way beyond what their technology could deliver. AI can absolutely be a big boon to healthcare, but not the way that IBM implemented it. IBM paid more attention to gathering data and mining academic papers and less attention to consulting with real doctors who are seeing patients daily.”

Instead of taking the time to learn from the actual doctors who were supposed to work with Watson, IBM “devoted their time and resources to a top-down approach that they sold to administrators,” said Olds. “They not only put the cart before the horse, but they piled the cart high with visions of revolutionary improvements in patient care that did not come close to materializing. So, rather than radically disrupting healthcare technology, IBM’s Watson Health goes out with a whimper and will probably become a cautionary tale on how to not introduce new tech.”

Rob Enderle, analyst

Another analyst, Rob Enderle, principal of Enderle Group, said that Watson Health was never a battle that IBM was structured to win.

“The cost of maintaining a healthcare business, particularly during a pandemic, is daunting mainly because of the inability to access critical medical information to provide proper diagnosis and treatment,” said Enderle. “To do this successfully needs focus and investment, and IBM, which is undergoing a turnaround, had to pick its battles. Thus, the medical assets that IBM accumulated – which are significant – are more valuable to an entity like Francisco Partners that can make better use of them through their other investments.”

IBM’s interest in selling Watson Health was seen as part of a strategy by CEO Arvind Krishna to streamline the company and become more competitive in cloud computing and other markets.

The Watson Health unit integrates AI, analytics and data to create augmented intelligence for hospitals, insurers and pharmaceutical companies.

IBM Watson Health’s financial performance has been a concern for IBM’s bean-counters in the past as well. In April of 2019 IBM halted the development and sales of its Watson AI drug discovery tools, citing disappointing sales, according to an earlier EnterpriseAI story. With the move, the company shifted the focus of its Watson Health offering to “clinical development” as it readjusted its market strategy. That move came amid reports of declining sales and growing skepticism about the utility of machine learning for complex medical research, the story reported.

IBM’s troubles with Watson Health came at a time when competitors were finding success in the health care market. In December of 2021, Oracle acquired Cerner Corp. for $28.3 billion, which made the company a major player in electronic healthcare records (EHR).
