Data modeling is the process of organizing and presenting information to highlight the relationships within data and its underlying structure. Presentation takes many forms – physical, logical, conceptual, etc. – and data is modeled to further specific organizational goals.
This may seem like a new and advanced concept, but data modeling has been around for centuries, even millennia. The most basic graphs and charts are examples of data modeling. Organizations of all types have historically sought ways to condense and communicate expansive ideas in digestible forms.
In order to help you understand where data modeling has come from and where it is heading, consider this brief history highlighting key developments and advances:
1960s
This is when the modern understanding of data modeling was first developed, largely in response to the growth of “management information systems.” Before this, companies simply did not store that much data, particularly in electronic formats.
This period produced three critical data models: the hierarchical data model, the network data model, and the relational data model. The differences between each are complex, but for the sake of brevity, each model follows a unique structure and organizing principle.
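As a rough illustration (the department and employee records below are invented), the same facts can be shaped hierarchically, as a nested tree reached from a single root, or relationally, as flat rows linked by a shared key:

```python
# Hierarchical model: records nest under one parent, and access
# follows the tree from the root downward.
hierarchical = {
    "department": "Sales",
    "employees": [
        {"name": "Ada", "title": "Manager"},
        {"name": "Grace", "title": "Analyst"},
    ],
}

# Relational model: the same facts as flat rows in two tables,
# connected by a key rather than by physical nesting.
departments = [{"id": 1, "name": "Sales"}]
employees = [
    {"dept_id": 1, "name": "Ada", "title": "Manager"},
    {"dept_id": 1, "name": "Grace", "title": "Analyst"},
]

# The relational form answers "who works in Sales?" with a join-like lookup:
sales_ids = {d["id"] for d in departments if d["name"] == "Sales"}
sales_staff = [e["name"] for e in employees if e["dept_id"] in sales_ids]
print(sales_staff)  # ['Ada', 'Grace']
```

The network model (not shown) generalizes the hierarchy by allowing a record to have more than one parent.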
Companies like IBM and General Electric were key to the development of data modeling in the 60s. Early object-oriented programming languages such as Simula also emerged around this time in response to growing amounts of data, paving the way for later languages like Smalltalk, C++, Eiffel, and Java.
1970s
This was the decade when data modeling evolved from being a theoretical concept into a practical tool accessible to a wide range of organizations. Edgar F. Codd, inventor of the relational data model, proposed a concept in which data is modeled using only columns and rows.
Unlike other models of the time, users did not need to write a navigational algorithm to reach the data. Instead, they could simply name a table and describe the records they wanted, which caused productivity to blossom. This concept was instrumental to IBM’s creation of SQL.
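A minimal sketch of Codd’s idea using Python’s built-in sqlite3 module (the table and column names here are invented for illustration): data lives in rows and columns, and a query states *what* is wanted rather than *how* to navigate the storage structure to find it:

```python
import sqlite3

# In-memory database; the schema is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Acme", "Boston"), (2, "Globex", "Chicago")],
)

# Declarative access: name the table and the condition; the database
# engine, not the user, decides how to locate the matching rows.
rows = conn.execute(
    "SELECT name FROM customers WHERE city = ?", ("Boston",)
).fetchall()
print(rows)  # [('Acme',)]
conn.close()
```

Freeing users from hand-written traversal code is exactly what made the relational model so much more productive than its navigational predecessors.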
1980s
A concept known as the Natural Language Information Analysis Method was developed in the late 70s. In the early 80s it was renamed object-role modeling, which represented the next wave in data modeling. In this method, data and procedures are stored separately, which at the time was a radical idea.
The 1980s saw the hierarchical data model fall out of favor and the relational data model reach a position of prominence. The relational model lends itself to query optimization, a capability that was becoming both cheaper and more important as the decade drew to a close.
1990s – Present
The name NoSQL first appeared in 1998, attached to a database that was relational and open source but did not use SQL as its query language. Later, the relational aspects were dropped from the term’s meaning, making it easier for users to incorporate “relation-less” data and to scale out to handle huge volumes. NoSQL also empowers users to apply several different data modeling methods, depending on the data they are looking at or the insights they are searching for.
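One popular NoSQL approach is the document model, sketched here with plain Python dictionaries standing in for a document store (the records and field names are invented): each record is self-contained and records need not share a schema.

```python
import json

# A "collection" of documents: each one carries its own structure,
# and related data can be nested inside rather than joined from tables.
collection = [
    {"name": "Ada", "skills": ["SQL", "Python"]},
    {"name": "Grace", "title": "Analyst", "manager": {"name": "Ada"}},
]

# Schema-less querying: a field missing from a document is simply
# absent, not a NULL column that every record must carry.
analysts = [d["name"] for d in collection if d.get("title") == "Analyst"]
print(analysts)  # ['Grace']

# Documents serialize naturally to JSON, which many NoSQL systems use
# as their storage and interchange format.
print(json.dumps(collection[0]))
```

Because each document is independent, collections like this can be split across many machines, which is what makes the model attractive for very large data sets.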
The evolution of data modeling has been brisk thus far, but we are arguably entering its golden age. Today’s data sets are larger than ever and growing at a breakneck pace. At the same time, the quality of today’s analytics, modeling, and visualization tools is far superior, and those tools are more accessible and affordable. That means data modeling reveals more while requiring less input and expertise from the user.
Data modeling has always given companies an advantage. Soon, however, it will be considered an indispensable business tool. Organizations behind the curve should begin planning for the future… and fast!