
Mapping Science Networks and Projects to Limit the Rise in Global Temperatures

When the United Nations released a report earlier this year warning that a catastrophic two-degree Celsius (3.6-degree Fahrenheit) rise in global average temperatures is expected to occur in the next decade, there was a media firestorm about the dire predictions. You know who wasn’t surprised? Climate scientists. (Read about the difference half a degree can make.)

Read More »

Finding Balance on the Spectrum Between Lone Geniuses and Team Scientists

Foreword by Jill Macchiaverna, Exaptive Media Specialist:

When we came across this podcast that so beautifully and concisely addressed many of the challenges our collaborators face, we knew we had to find a way to share it. A huge thank you to AIBS and BioScience Talks for allowing us to post their September 12, 2018, episode called “Big Data is Synergized by Team and Open Science.”

Read More »

The Sticky Note Exercise

How do you pick who works together, who reports to whom, and who exchanges information with whom? Usually it gets done within a department, within a project team, or based on some other common ground. It turns out we should be focusing on our differences a bit more.

Read More »

Innovation Management: The Value of Seeing What You Have

If your job is to get your company, team, or community to innovate, you know how organizational forces can make it hard to even try something new. Visualizing the resources available is an effective first step in overcoming some of those organizational forces. Simply being able to see, and show, what you have allows you to make a compelling case for marshaling resources and even spark some initial interactions in that direction.

Read More »

If every ‘new’ idea is derivative, derive them.

Everything is derivative. Take advantage of that. “New” ideas are the next step in an extensive network of existing people and ideas. If we can get the data and reconstruct the network, we can analyze it and understand where branches of a network have the potential for innovation.

Read More »

How Software Can Augment Human Collaboration

Innovation requires collaboration, but collaboration is stuck in a rut. Data science can help us climb out. It can increase the scale, the intentionality, and the nuance of how we collaborate. With the right data and algorithms, we can set our teams up to do something innovative.

Read More »

Using Science to Build a Dynamic Collaboration Engine

“Good ideas are getting harder to find,” Exaptive CEO Dave King says, quoting a recent paper by MIT and Stanford researchers. He points to the skyrocketing number of researchers employed in the U.S. and contrasts it with the inverse slope on a chart tracking researcher efficiency over the same timeline. “That growing number of researchers is failing to produce value that outpaces what we’re spending to innovate.”

Read More »

How Data Visualization Supports the Formation of Better Hypotheses

Since Exaptive launched in 2011, we’ve worked with many researchers, particularly in medicine and the natural sciences. PubMed®, a medical journal database, pops up repeatedly as a key tool for these researchers to develop hypotheses. It’s a tool built in a search-and-find paradigm with which we’re all familiar. Execute a keyword search. Get a list of results. Visualization can make search - and, therefore, research - much more meaningful.

Read More »

Modern Research: Faster Is Different

Faster is different. It sounds strange at first because we expect faster to be better. We expect faster to be more. If we can analyze data faster, we can analyze more data. If we can network faster, we can network with more people. Faster is more, which is better, but more is different.

Read More »

Machine Learning Helps Humans Perform Text Analysis

The rise of Big Data created the need for data applications to consume data residing in disparate databases with wildly differing schemas. The traditional approach to performing analytics on this sort of data has been to warehouse it: to move all the data into one place, under a common schema, so it can be analyzed.

This approach is no longer feasible given the volume of data being produced, the variety of data requiring specific optimized schemas, and the velocity at which new data is created. A much more promising approach is based on semantically linked data, which models data as a graph (a network of nodes and edges) instead of as a series of relational tables.
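
As a minimal sketch of the linked-data idea (the records and predicate names here are invented for illustration), data can be modeled as subject-predicate-object triples and queried by pattern matching rather than by joining tables:

```python
# Model data as subject-predicate-object triples - a tiny stand-in for
# RDF-style linked data - instead of rows in relational tables.
triples = [
    ("alice",  "works_at", "acme"),
    ("bob",    "works_at", "acme"),
    ("alice",  "authored", "paper1"),
    ("paper1", "cites",    "paper2"),
]

def match(triples, s=None, p=None, o=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Who works at acme?" - answered by walking the graph, with no shared
# schema across sources and no join.
employees = [s for s, _, _ in match(triples, p="works_at", o="acme")]
```

Because every source reduces to the same triple shape, new data can be merged by simply concatenating lists of triples, with no schema migration.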

Read More »

How To Use PubMed®: The New Way

The researchers, principal investigators, and science writers we’ve talked to seem to have a love-hate relationship with PubMed. They love that a simple search can get them quick access to the latest articles, but they hate the limiting interface and how much reading is required to find good articles. They told us they aren’t always sure they got all the right articles, and that they want a more efficient, customized system that meets more of their needs and gets better over time as it learns from their behavior.

Read More »

Owning the full-stack: A homesteading analogy on software, innovation, and freedom

Have you ever met a homesteader who owns a mansion? Me neither. My neighbor, Bill (80), is a homesteader who tries to be as self-sufficient as possible. From what I can see, it’s an immensely rewarding and humble existence. Life-satisfaction oozes out of his every pore and, eventually, even enduring the hardships must have become rewarding to him.

He was wearing an interesting smile when he told me that for 20 years the only inputs to the property were paper goods (read: toilet paper) and that they don’t have any source of heat other than wood, which he cuts off his own property. Homesteading is immensely hard, and it’s not for everyone. Homesteaders don’t have time to live a life of luxury, because homesteading means you have to own all the problems of life: food, shelter, and producing enough value to trade for the things you can’t produce yourself. I think this is similar to how a full-stack developer has to own the problems of the whole stack.

Read More »

Cognitive What?! Explaining How to Assemble a Team for Collaboration

So many fantastic quotes are attributed to Albert Einstein. If you hear our CEO Dave King speak, he may bring up his favorite: “Combinatory play seems to be the essential feature in productive thought.” To have an aha moment, we have to play with a challenge from a variety of perspectives. We have to build collaborative teams to tackle complex problems.

Figuring out how to build the team with the greatest chance for success can be complicated. Ideal innovation partners may be isolated geographically, in different time zones, or just not aware of the skills their coworkers can bring to a project. At Exaptive, our main goal is to facilitate innovation. We use sophisticated technology to help groups assemble research teams for collaboration, and we've found we can demonstrate the concept on paper. Cue the choir as the gates open to Exaptive’s Cognitive City!

Read More »

Moving Beyond Data Visualization to Data Applications

One thing we love doing at Exaptive – aside from creating tools that facilitate innovation – is hiring intelligent, creative, and compassionate people to fill our ranks. Frank Evans is one of our data scientists. He was invited to present at the TEDxOU event on January 26, 2018.

Frank gave a great talk about how to go beyond data visualization to data applications. A verbatim transcript of his talk is below the video. Learn more about how to build data applications here.

Read More »

Exploring Tech Stocks: A Data Application Versus Data Visualization

A crucial aspect that sets a data application apart from an ordinary visualization is interactivity. In an application, visualizations can interact with each other. For example, clicking on a point in a scatterplot may send corresponding data to a table. Visualizations can also be enhanced with simple filtering tools, e.g., selections in a list can update the results shown in a heat map.

You can already try some linked visualizations to find the perfect taco. Now, we'll look at how some simple filtering elements enhance visualization, using a tech stock exploration xap I built over a couple of days. (A xap is what we call a data application built with the Exaptive Studio.) A few simple but flexible interactive elements can help transform ordinary visualizations into powerful, insightful data applications. Humble checkboxes and lists help produce extra value from charts and plots.
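
The linked-filtering behavior described above can be sketched as a simple publish/subscribe pattern (the stock data and component names here are hypothetical, not the xap's actual code):

```python
class Selection:
    """A filter element (e.g. a checkbox list) that notifies subscribers."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def select(self, values):
        for callback in self.subscribers:
            callback(values)

stocks = [
    {"ticker": "AAPL", "sector": "hardware"},
    {"ticker": "MSFT", "sector": "software"},
    {"ticker": "ORCL", "sector": "software"},
]

visible = []  # stands in for the data a linked chart would render

def update_chart(sectors):
    """Refilter the chart's data whenever the selection changes."""
    visible[:] = [s for s in stocks if s["sector"] in sectors]

sector_list = Selection()
sector_list.subscribe(update_chart)
sector_list.select({"software"})  # clicking "software" updates the chart
```

The same wiring scales to any number of linked elements: each visualization just subscribes to the selections it cares about.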

Read More »

A Graph-Analytics Data Application Using a Supercomputer!

We recently had to prototype a data application over a supercomputer tuned for graph analysis. We built a proof-of-concept leveraging multiple APIs, Cray’s Urika-GX and Graph Engine (CGE), and a handful of programming languages in less than a week.

Read More »

A Data Application for the _______ Genome Project

The ability to reuse and repurpose - exaptation - is often a catalyst for exciting breakthroughs. The Astronomical Medicine Project (yes, astronomical medicine) was founded on the realization that space phenomena could be visualized using MRI software, as if they were highly irregular brains. The first private space plane, designed by Burt Rutan, reenters the atmosphere using wings inspired by a badminton birdie. Anecdotes like these abound in many fields, and the principle applies as much to working with data and creating data applications as it does to any innovation.

To demonstrate - and give our users a running start at successfully repurposing something - we want to share an editable data application, the Taco Cuisine Genome Atlas. We held an internal hackathon in which teams had a day to design and build a xap. (A xap is what we call a data application built with our platform. Learn a bit more about our dataflow programming environment here.) One team took algorithms and visualizations created for a cancer research application and applied them to tacos. Application users can identify, according to multiple ingredients, specific tacos and where to find them.

The best part is that this wasn't entirely an act of frivolity. Repurposing healthcare and life sciences tools on different, albeit mundane, data led to a potential improvement for the cancer research application - a map visualization of clinical trials for specific cancer types. 

It can't be said enough. New perspective is a key catalyst for innovation. 

So, we've made this xap available for the public to kickstart other work. Explore it, build off it, and apply it to your own data. You can also learn the basics of how it's done.

Read More »

A Data Exploration Journey with Cars and Parallel Coordinates

Parallel coordinates is one way to visually compare many variables at once and to see the correlations between them. Each variable is given a vertical axis, and the axes are placed parallel to each other. A line representing a particular sample is drawn between the axes, indicating how the sample compares across the variables.

Previously, I wrote about how it's possible to create a basic network diagram application from just three components in the Exaptive Studio. Many users will require something more scalable from a data application, and fortunately the Studio allows for the creation of something like our Parallel Coordinates Explorer. A parallel coordinates diagram can easily become cluttered, but our Parallel Coordinates component lets users rearrange axes and highlight samples in the data to filter the view.

It helps to use some real data to illustrate. One dataset that many R aficionados may be familiar with is the mtcars dataset. It's a list of 32 different cars, or samples, with 11 variables for each car. The list is derived from a 1974 issue of Motor Trend magazine, which compared a number of stats across cars of the era, including the number of cylinders in the engine, displacement (the size of the engine, in cubic inches), economy (in miles per gallon of fuel), and power output.

Let's say we're interested in fuel economy, and want to find out what characteristics could signify a car with good fuel economy. Anecdotally, you may have heard that larger engines generate more power, but that smaller engines get better fuel economy. You may also have heard that four-cylinder engines are typically smaller than engines with more cylinders. Does this hold true for Motor Trend's mtcars data?

To find out, we'll use a xap (what we call a data application made with Exaptive) that lets a user upload either a CSV or an Excel file and generates a parallel coordinates visualization from the data. But a data application is more than a data visualization. We're going to make a data application that selects and filters the data for rich exploration.
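
For readers outside the Studio, the same kind of chart can be sketched with pandas' built-in parallel-coordinates plot; the handful of cars below are synthetic stand-ins, not the real mtcars rows:

```python
import matplotlib
matplotlib.use("Agg")  # render headlessly
import pandas as pd
from pandas.plotting import parallel_coordinates

# A few synthetic, mtcars-style rows: one row per car, one axis per column.
cars = pd.DataFrame({
    "cyl":  [4, 4, 6, 8, 8],
    "disp": [108, 120, 225, 360, 440],   # engine size, cubic inches
    "hp":   [93, 97, 105, 245, 230],     # power output
    "mpg":  [22.8, 21.5, 18.1, 14.3, 14.7],
    "kind": ["small", "small", "mid", "big", "big"],
})

# Each car is drawn as a line across the parallel axes, colored by "kind".
ax = parallel_coordinates(cars, class_column="kind")
```

Even in this toy data, the lines cross between the disp and mpg axes - the visual signature of a negative correlation.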

In our dataflow programming environment, we use a few components to ingest the data and send a bundle of data to the visualization. Then a handful of helper components come together to make an application with which an end-user can explore the data.

Here's the dataflow diagram, with annotations. 

Read More »

Use a Network Diagram to Uncover Relationships in Your Data

Oftentimes, when we're looking at a mass of data, we're trying to get a sense of the relationships within that data. Who is the leader in this social group? What is a common thread between different groups of people? Such relationships can be represented graphically in hundreds of ways, but few are as powerful as the classic network diagram.

Read More »

Finding Netflix's Hidden Trove of Original Content with a Basic Network Diagram

Netflix has collected an impressive amount of data on Hollywood entertainment, made possible by tracking the viewing habits of its more than 90 million members. In 2013, Netflix took an educated guess based on that data to stream its own original series, risking its reputation and finances in the process. When people were subscribing to Netflix to watch a trove of television series and movies created by well-established networks and studios, why create original content? Now, few would question the move.

Read More »

Rapid Data Products: Kicking the Tires on IBM Watson in One Day

Late last year I turned the venerable age of 40, and graying and balding jokes aside, I've spent a good bit of time reflecting on the accelerating pace of change in technology. It's not just that things are getting faster, better, cheaper. It's that whole new capabilities are now possible that we could only dream about even a few decades ago. Mail is electronic. A TV and a computer are basically the same thing. And you can talk to your phone.

Read More »

Gleaning Insight from Content with IBM Watson

In recent years, machine learning as a service has come of age, with robust capabilities from Amazon, Google, Microsoft, and others now available through REST APIs for a fraction of the cost of deploying or developing your own capabilities. One of the best known - even if it's not easy to separate the hype from the reality - is IBM Watson. While Watson gained fame as the Jeopardy!-winning supercomputer, IBM now uses the brand for a wide variety of machine learning capabilities, from speech-to-text and conversational bots to text mining algorithms for understanding the concepts, references, and tone of text-based content.

In this post, we'll cover how to integrate one such IBM service – Natural Language Understanding – and rapidly prototype an application that you can try on your own content. It includes how to get started with IBM's hosted service Bluemix and the Python code to connect to the REST API. I've also included a working data application that you can run with your own text.
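
As a sketch of the request shape (the version date, feature selection, and credential handling below are placeholder assumptions you'd replace with your own service details):

```python
def build_payload(text):
    """Request body for Natural Language Understanding's analyze endpoint:
    the text to analyze, plus the features we want back."""
    return {
        "text": text,
        "features": {
            "concepts": {"limit": 5},
            "sentiment": {},
        },
    }

def analyze(text, api_key, service_url):
    """POST the text to your hosted Natural Language Understanding
    instance; api_key and service_url come from your IBM Cloud credentials."""
    import requests  # third-party; pip install requests
    resp = requests.post(
        service_url.rstrip("/") + "/v1/analyze",
        params={"version": "2019-07-12"},  # API version date; pick your own
        auth=("apikey", api_key),
        json=build_payload(text),
    )
    resp.raise_for_status()
    return resp.json()
```

The response is plain JSON, so the concepts and sentiment scores drop straight into a data application component.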

Read More »

What is a Data Application?

There are data visualizations. There are web applications. If they had a baby, you'd get a data application.

Data applications are a big part of where our data-driven world is headed. They're how data science gets operationalized. They are how end-users - whether they're subject matter experts, business decision makers, or consumers - interact with data, big and small. We all use a data application when we book a flight, for instance. 

Dave King, our CEO and one of our chief software architects, spoke with Software Engineering Daily about what makes data applications important and best practices for building them. Check out the podcast or read the abridged transcript beneath it. (Learn how they're built, or try building a data application if you'd like.)

Read More »

Alleviating Uncertainty in Bayesian Inference with MCMC Sampling and Metropolis-Hastings

Bayesian inference is a statistical method used to update a prior belief based on new evidence, an extremely useful technique with innumerable applications. Uncertainty about probabilities that are hard to quantify is one of the challenges of Bayesian inference, but there is a solution that is exciting for its cross-disciplinary origins and the elegant chain of ideas of which it is composed.
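
The flavor of that idea can be shown with a minimal Metropolis-Hastings sampler - a deliberately bare-bones sketch, targeting a standard normal so the result is easy to check:

```python
import math
import random

def metropolis_hastings(log_target, n_samples, x0=0.0, step=1.0, seed=42):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Unnormalized log-density of a standard normal. MH never needs the
# normalizing constant - exactly the hard-to-quantify piece in Bayesian
# inference - which is why the method is so useful there.
samples = metropolis_hastings(lambda x: -0.5 * x * x, n_samples=5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's mean and variance settle near 0 and 1, the moments of the target, even though we only ever evaluated an unnormalized density.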

Read More »

When Earth is Like an Egg: 3D Terrain Visualization

Some of the most satisfying breakthroughs happen when technology gets used in a way it was never intended. While working with our graphic design group at Sasaki on ways to generate a dot pattern for a decorative screen, we came across some open-source software called StippleGen. Stippling is a way of creating an image by means of dots. StippleGen was created to optimize stippling for, among other things, egg painting. The software does a great job of laying out dots with greater density on the darker areas of the image while keeping a comfortable spacing between the dots. What's more, the Voronoi algorithm it uses gives an irregular, organic pattern. The aha moment came when I realized this could be applied to a different problem: visualizing terrain; specifically, optimizing terrain meshes in 3D software based on elevation data (a.k.a. a Digital Elevation Model, or DEM).

(Images: a typical use of StippleGen, and the stippled image it creates.)
So how do we get from eggs to terrain? A given terrain, unlike an egg, is typically a mix of high variation areas, like canyons, with more uniform areas, like plains or plateaus. A typical DEM heightmap can be seen in the following image (top left) alongside some more familiar, human-readable representations of the same terrain that you might see on maps. Shaded relief is a useful trick for representing terrain in 2D where the terrain appears to be lit from one side.
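
The core trick can be sketched in a few lines of NumPy (using a toy heightmap, not real DEM data): treat local elevation variation the way stippling treats darkness, and place mesh vertices with probability proportional to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 100x100 heightmap: flat plains on the left, rough canyons on the right.
heights = np.zeros((100, 100))
heights[:, 50:] = rng.normal(0.0, 10.0, size=(100, 50))

# Local variation = gradient magnitude, playing the role of image darkness.
gy, gx = np.gradient(heights)
variation = np.hypot(gx, gy).ravel()

# Draw 500 vertex sites, weighted by variation: rough areas get dense
# sampling, flat areas get almost none - no wasted triangles on plateaus.
probs = variation / variation.sum()
sites = rng.choice(variation.size, size=500, replace=False, p=probs)

cols = sites % 100                    # column index of each sampled site
rough_fraction = (cols >= 50).mean()  # share landing in the rough half
```

Nearly all the sampled vertices land in the rough half of the map, which is exactly the adaptive density a terrain mesh wants.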

Read More »

A Data Application to Foretell the Next Silicon Valley?

Can we predict what the next hub of tech entrepreneurship will be? Could we pinpoint where the next real estate boom will be and invest there? Thanks to advances in machine learning and easier access to public data through Open Data initiatives, we can now explore these types of questions.

Read More »

Finding Abstractions that Give Data Applications 'Flight'

Continuing with our recent theme of abstraction in data applications, Dave King gave a talk last month explaining his design principles for "Making Code Sing: Finding the Right Abstractions." Nailing the best abstractions is a quintessential software challenge. We strive for generality, flexibility, and reuse, but we are often forced to compromise in order to get the details right for one particular use case. We end up with projects that we know have amazing potential for use in other applications but are too hardcoded to make repurposing easy. It’s frustrating to see the possibilities locked away, just out of reach.

Read More »

Epiphanies on Abstraction, Modularity, and Being Combinatorial

Six months ago I didn't understand the concept of abstraction. Now it comes up almost daily. It’s foundational to my thinking on everything from software to entrepreneurship. I can’t believe how simple it seems. When I finally grokked abstraction, it felt like my first taste of basic economics. Given a new framework, something that had always been there, intuited but blurry, came into focus.

Read More »

Making Service-Oriented Architecture Serve Data Applications

Bloor Group CEO Eric Kavanagh recently chatted with David King, CEO and founder of Exaptive. Their discussion looked at the ways in which service-oriented architecture (SOA) has and has not fulfilled its promise, especially as it applies to working with data. Take a listen or read the transcript.

Eric Kavanagh:  Ladies and gentlemen, hello and welcome back once again to Inside Analysis. My name is Eric Kavanagh. I’ll be your host for today’s conversation with David King. He is founder and chief executive officer of a very cool company called Exaptive. David, welcome to the show.

Dave King: Thanks for having me.

Eric: Sure thing. First of all, I’d like to just throw out a couple quick thoughts to frame the discussion here. I’m familiar with what you are doing at Exaptive, and I think it’s absolutely fascinating. In this world of enterprise software, we have these huge organizations, these very large companies - IBM, SAP, Oracle, and SAS - and they’re just obviously prodigious companies building enterprise software. There’s a lot of great stuff that’s come out of that, no doubt about it, but of course, there are some pretty significant constraints. One is cost. A lot of that stuff is pretty expensive, but there are other walls that have been built up. Some are virtual, some are metaphorical, and they make the whole process of really digging into data and analyzing data somewhat cumbersome, I think.

Read More »

A Novice Coder, a Finance Data Application, and the Value of Rapid Prototyping

I like to build things. I like analysis. I like programming. Interestingly, you often need to reverse that order before you’re in a position to build an application for analyzing something. You need programming knowledge to turn the analysis into a “thing.” The problem is, while I like programming, I’m still new to it. I mean, I’m Codecademy good, but that doesn’t translate into a user-facing application leveraging Python, JavaScript, and D3. So, when I recently sat down to build a minimally viable data application for looking at airline stocks, I wondered how long it might take to get to viable and, frankly, feared how minimal it might be.

Read More »

How a Data Scientist Built a Web-Based Data Application

I’m an algorithms guy. I love exploring data sets, building cool models, and finding interesting patterns hidden in that data. Once I have a model, then of course I want a great interactive, visual way to communicate it to anyone who will listen. When it comes to interactive visuals, there is nothing better than JavaScript’s D3. It’s smooth and beautiful.

But like I said, I’m an algorithms guy. Those machine learning models I’ve tuned are in Python and R. And I don’t want to spend all my time trying to glue them together with web code that I don't understand very well and I’m not terribly interested in.

Read More »

Topic Modeling the State of the Union: TV and Partisanship

Do you feel like partisanship is running amok? It’s not your imagination. As an example, the modern State of the Union has become hyperpartisan, and topic modeling quantifies that effect. 

Topic modeling finds broad topics that occur in a body of text. Those topics are characterized by key terms that have some relationship to each other.  Here are the four dominant topic groups found in State of the Union addresses since 1945.

Read More »

How to Tell an Interesting Data Story

The Laffer Curve. Anyone know what this says? It says that at this point on the revenue curve, you will get exactly the same amount of revenue as at this point. This is very controversial. Does anyone know what Vice President Bush called this in 1980? Anyone? …Bueller?... Bueller?... Bueller?

Data stories, believe it or not, can be gripping. You worked on the project because there was some urgency to it. The story is interesting because of that urgency and how you dealt with it.

In prior posts on communicating about data, I’ve introduced what it means to use ‘story’ to connect with your audience and how to structure a post to catch a non-captive audience's attention. In this post, learn what kind of content makes a data story a pleasure to experience.

Read More »

Effecting Change Using Social Influence Mapping

If you've ever tried to get a company to adopt new software you know how challenging it can be. Despite what seem to you like obvious benefits and your relentless communication, people selectively ignore or, worse, revolt against the change. Change efforts will even stumble in the face of this wisdom of the ages:

Read More »

How I Made a Neural Network Web Application in an Hour

Computer vision is an exciting and quickly growing set of data science technologies. It has a broad range of applications from industrial quality control to disease diagnosis. I have dabbled with a few different technologies that fall under this umbrella before, and I decided that it would be a worthwhile endeavor to rapid prototype an image recognition web application that used a neural network.

Read More »

Text Analysis with R: Does POTUS Write the State of the Union or Vice Versa?

In this post, I apply text clustering techniques – hierarchical clustering, K-Means, and Principal Components Analysis – to every presidential State of the Union address from Truman to Obama. I used R for the setup, the clustering, and the data vis.

It turns out that the state of the union writes the State of the Union more than the president does. The words used in the addresses appear linked to the era more than to an individual president or his party affiliation. However, there is one major exception in President George W. Bush, whose style and content mark a sharp departure from both his predecessors and his contemporaries. You can see the R scripts and more technical detail on the process here. The State of the Union addresses up to 2007 are available here, and the rest you can get here.
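
The post's pipeline is in R, but the shape of the approach can be sketched in Python (with an invented four-speech corpus): vectorize the text, then cluster hierarchically.

```python
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.feature_extraction.text import TfidfVectorizer

# Four toy "addresses": two Cold War-flavored, two 1990s-flavored.
speeches = [
    "soviet communism defense atomic treaty",
    "communism soviet treaty defense missiles",
    "internet technology economy education jobs",
    "technology jobs internet innovation economy",
]

# TF-IDF features, then Ward-linkage hierarchical clustering.
X = TfidfVectorizer().fit_transform(speeches).toarray()
Z = linkage(X, method="ward")                     # the clustering tree
labels = fcluster(Z, t=2, criterion="maxclust")   # cut it into two clusters
```

The two Cold War speeches land in one cluster and the two 1990s speeches in the other: era, not speaker, drives the grouping - the post's finding writ small.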

Read More »

Einstellung Effect: What You Already Know Can Hurt You

The Einstellung effect is a psychological phenomenon that changes the way we all come to solutions and impedes innovation.


Every day we solve problems - from choosing the quickest way to work, to how we’re going to fix a problem for that one client. How do we know if our solutions are any good? What if there is a much better solution that we haven’t thought of yet?

Read More »

Communicating Data Science: How to Captivate a Noncaptive Audience

When communicating about your latest data science project, whether verbally or in writing, your audience often needs to know the takeaway right away, or you’ll lose their attention. This is especially the case if your audience includes colleagues, conference attendees, or readers from outside your field. In an earlier post on communicating data science, I dove into how the elements of story can hold your audience’s attention through a dense presentation. This post introduces (and applies) some tried and true approaches for introducing the end of your story at the beginning. You’ll capture the attention of those for whom your point is valuable and have their attention for your story, and the rest of the audience doesn’t matter.

Read More »

Data Science Wanderlust: Analyzing Global Health with Protein Sequences

Fifteen years ago, I had the unique opportunity to go on Semester at Sea, an around-the-world trip on a converted cruise ship that combined college coursework with stops in nine countries on four continents. This once-in-a-lifetime trip instilled in me a strong sense of wanderlust and a deep desire to give back to the global community.

Every Journey Begins with a Single Step

Fast-forward to a few months ago, when I joined Exaptive on an exciting new project. A large NGO enlisted us to analyze a massive set of historical data for countries. The goal: to develop a better, more granular means of grouping countries than the outdated and crude approach of "developed" and "developing." This large, complex, messy dataset and thorny problem were a great fit for my background in artificial intelligence and data science.

Read More »

Communicating Data Science with 'Story'

Getting your audience’s attention, keeping it, and persuading listeners of your point are all hard to do in a world where most listeners start out thinking, and feeling, “I’ve got my own scheisse to do.” John Weathington’s recent post in TechRepublic, “Be the Hemingway of Data Science Storytelling,” makes the point that presenting data, which can be dry, is more effective if it incorporates elements of story – a protagonist, a journey with challenges, and a conclusion. Jeff Leek’s “The Elements of Data Analytic Style” has a chapter about presenting data that emphasizes story as the method for communicating results.

Great points. Essential.

But how literally should “story” be taken? Story, often romanticized as an abstract concept by omitting an article in front of it, can be an enchanting idea.

Read More »

The True Meaning of Catalyst, Crescendo, and Adaptation

People sometimes ask me what our company’s name means and why we chose it. The explanation often leads to discussions about similar-but-different terms. So I thought I’d use this blog post to explain, hopefully illuminate, and, while I’m at it, correct some usage that’s bugged me for some time. Actually, let’s start right there.

I'm not sure where accuracy becomes pedantry, but there are two words - catalyst and crescendo - that instantly make my ears prick up when I hear them, if only because I've heard them used incorrectly for so long. One is from science and one from music, two things I tend to obsess about.

Read More »

Embracing the Hairball

One of the perennial challenges in visualizing complex networks is dealing with hairballs: how do you draw a network that is so large and densely interconnected that any full rendering of it tends to turn into an inscrutable mess? There are various approaches to addressing this problem: BioFabric, Hive Plots, and many others. Most involve very different visual abstractions for the network.

There is something compelling, however, in seeing the full, messy complexity of a network laid out in one image. Many of the alternate approaches have the disadvantage of being less intuitive. Most people are accustomed to inferring network structure from a collection of dots and lines; not so much from a matrix representation. I wondered if there wasn't a way to retain the immediacy and intuitiveness of, say, a force-directed layout, while somehow ordering it and stretching it out in a way that would give the important elements room to breathe. In this blog post I will describe an effort to find this middle ground.
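
As a rough illustration of the kind of middle ground described above (networkx here is my stand-in, not the post's actual implementation):

```python
import networkx as nx

# A dense "hairball": a random graph with heavy interconnection.
G = nx.gnp_random_graph(60, 0.2, seed=42)

# The familiar force-directed (spring) layout - intuitive, but prone to
# collapsing a dense network into an inscrutable blob.
pos = nx.spring_layout(G, seed=42)

# One naive way to stretch it out: re-space nodes along the x-axis by
# degree rank, so heavily connected nodes get room to breathe while the
# spring layout's y-coordinates keep some of its intuitive structure.
order = sorted(G.nodes, key=G.degree)
for rank, node in enumerate(order):
    _, y = pos[node]
    pos[node] = (rank / len(order), y)
```

Drawing with the adjusted positions (e.g. `nx.draw(G, pos)`) keeps the dots-and-lines idiom while ordering the layout, which is the spirit of the approach explored in the post.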

Read More »

Cowboys and Inventors: The Myth of the Lone Genius

I recently moved from Boston to Oklahoma City. My wife got offered a tenure-track position at the University of Oklahoma, which was too good an opportunity for her career for us to pass up. Prior to the move, I had done a lot of traveling in the US, but almost exclusively on the coasts, so I didn't know what living in the southern Midwest would bring, and I was a bit trepidatious. It has turned out to be a fantastic move. There is a thriving high-tech startup culture here. I've been able to hire some great talent out of the University, and we're now planning to build up a big Exaptive home office here. Even more important, I was delighted to find a state that was extremely focused on fostering creativity and innovation. In fact, the World Creativity Forum is being hosted here this week, and I was asked to give a talk about innovation. As I thought about what I wanted to say, I found myself thinking about . . . cowboys.

Read More »

The Data Scientific Method

The Oxford English Dictionary defines the scientific method as "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses." With more scientists today than ever, the scientific method is alive and well, and generating more data than ever. This explosion of data has brought about the field of data science and an associated plethora of analytics tools. Controversially, some have claimed, such as in this Wired magazine article, that data science is so powerful that it has made the scientific method obsolete. Google's founding philosophy is that “we don't know why this page is better than that one. If the statistics of incoming links say it is, that's good enough.” The implication is that with enough data, people will no longer need to know why something happens, it just does, and that’s good enough. Is it, really?

Read More »

We work on Technology. Then it works on us.

I think I was 10 years old when my dad brought home our first microwave oven. It was an imposing black box that weighed a ton and had scary warning labels that mentioned radiation. The only time I had ever heard mention of radiation before was in regard to the atom bomb. We felt like we were supposed to run for cover whenever we turned it on, but, like everyone else I knew who had one, we did just the opposite. We huddled around it. We brought our noses right up to the translucent window, and watched, mesmerized in wonder, as the food inside got zapped by mysterious, limitless, invisible energy. When the timer beeped, and the door opened to reveal a steaming bowl of soup that had been cold only a minute ago, it seemed like a miracle. I remember those early days with the microwave vividly – experimenting with eggs, and chocolate syrup, and the off-limits gold-rimmed fine china that would send off an awe-inspiring barrage of orange sparks after just 15 seconds. Just 15 seconds! 15! I think that was the most important thing of all about the microwave oven – not what it did to my food, but what it did to my sense of time.

Read More »

309 NW 13th St, Oklahoma City, OK 73103 | 888.514.8982