Dec 18, 2012 | Atlanta, GA
Imagine a back-office banking employee hard at work on data analysis. Using a spreadsheet, she pores over the latest extraction from the “big data” of total transactions. Her focus: a tiny subset of seemingly routine banking transactions that may be the latest entries in an elaborate, multi-continent money-laundering scheme. Happily for the bank employee, a complex software program using automated machine learning has already sifted through the vast universe of potentially suspicious transactions. What once took dozens of investigators many months to do by hand, the artificial intelligence technology does—with fewer errors—in days.
To Coca-Cola Chair in Engineering Statistics Jeff Wu, this kind of big data mining is just one example of the potential of exploring the vast store of information accumulated by millions of business and consumer transactions in modern life. The banking example is a real one, developed by Wu and colleagues, including a senior vice president with Bank of America, which later commercialized the product and used it to save millions of dollars through better identification of money-laundering fraud.
“We have had big data since the days of the NCR cash register and the automotive assembly line,” says Wu. “But in the early years, retailers and manufacturers were not thinking about how to use it.” Today, with huge quantities of data collected and stored via the Internet, the challenge is no longer collecting data but figuring out how to use it for better decision-making in a wide range of fields. “We need the data to make sense,” he says. “We have data collected by Google, Amazon, Yahoo, Facebook—what can we do with it? It’s not just a computer science challenge; it’s a statistical and industrial engineering challenge as well.”
This article first appeared in the 2012 edition of the ISyE Alumni Magazine.