Ben Hopkins

Senior Product Manager

September 26, 2016
Operationalize Spark and Big Data with Pentaho’s Newest Enhancements
Over the last 18 months or so, we at Pentaho have witnessed the hype train around Spark gather full steam. The huge interest in Spark is, of course, justified. As a data processing engine, Spark can scream because it leverages in-memory computing. Spark is flexible – able to handle...
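As a quick illustration of the in-memory point, here is a minimal PySpark sketch (my own example, not code from the post; the input path and column name are hypothetical): caching a DataFrame keeps it in executor memory, so repeated aggregations avoid re-reading the source data.

```python
# Minimal PySpark sketch (illustrative; the path and column are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-memory-example").getOrCreate()

# Read a hypothetical event dataset from HDFS.
events = spark.read.json("hdfs:///data/events")

# cache() keeps the DataFrame in executor memory after the first action,
# so later computations run against memory instead of re-reading disk.
events.cache()

print(events.count())                        # first action materializes the cache
events.groupBy("event_type").count().show()  # reuses the in-memory data

spark.stop()
```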
August 19, 2016
Filling the Data Lake with Hadoop and Pentaho
A blueprint for big data success. What is the “Filling the Data Lake” blueprint? The blueprint for filling the data lake refers to a modern data onboarding process for ingesting big data into Hadoop data lakes that is flexible, scalable, and repeatable. It streamlines data ingestion from a wide variety...
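To make “flexible, scalable, and repeatable” concrete, here is a minimal sketch of a parameterized onboarding step in PySpark; it is my own illustration rather than the blueprint itself (which is built with Pentaho tooling), and the source names, paths, and partition column are hypothetical.

```python
# Illustrative sketch of a repeatable data-onboarding step: one parameterized
# function lands any raw source in the lake, partitioned by load date.
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

def ingest_source(spark, source_name, input_path, load_date):
    """Read one raw source and write it to the data lake as Parquet."""
    df = spark.read.option("header", "true").csv(input_path)
    (df.withColumn("load_date", lit(load_date))
       .write.mode("append")
       .partitionBy("load_date")
       .parquet(f"hdfs:///lake/raw/{source_name}"))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("data-lake-onboarding").getOrCreate()
    # The same step, driven only by parameters, onboards many different sources.
    ingest_source(spark, "web_clicks", "hdfs:///landing/web_clicks/*.csv", "2016-08-19")
    ingest_source(spark, "pos_sales", "hdfs:///landing/pos_sales/*.csv", "2016-08-19")
    spark.stop()
```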
June 22, 2016
Do you Have a Data Silo Problem?
The Problem of Data Variety. If you ask organizations what data problems they face, the most common answer isn’t “big” data problems, or “real-time” data problems: it’s data variety. Data variety often comes from diverse data types, formats, and sources. When data is varied, it is also often siloed away...
January 8, 2016
5 Big Data Favorites from 2015
The past year has marked the continued evolution of Big Data adoption. We’ve seen more and more Pentaho customers successfully putting all of their data to work – driving revenue growth, operational efficiency, and better customer experiences. At the same time, we realize not every organization is at the same...
June 9, 2015
Pentaho 5.4 and Your Data Architecture Evolution
Pentaho version 5.4 is here, and it’s all about future-proofing your data architecture and your organization. Before you dismiss future-proofing as just more business-speak, let me give you an example that shows what we mean and why this idea is so central to Pentaho as a platform for...