SQL vs. NoSQL
SQL databases (also known as relational databases) were developed in the 1970s. Monolithic SQL databases like Oracle, PostgreSQL, SQL Server, and MySQL have remained in use largely because they deliver strong consistency and ACID transactional guarantees. However, these databases tend to have rigid, complex, tabular schemas and cannot automatically distribute data and queries across multiple instances, so they typically require expensive vertical scaling.
NoSQL databases were developed in the late 2000s as an alternative to SQL databases; examples include Cassandra, MongoDB, Amazon DynamoDB, and Azure Cosmos DB. Their goal was to provide horizontal scalability without compromising performance. However, this typically comes at the cost of eventual consistency (rather than ACID guarantees) and the loss of multi-key access patterns, including SQL integrity/foreign key constraints and JOINs.
SQL vs. NoSQL in Application Development
Despite the introduction of NoSQL databases, most application developers still prefer SQL databases. One reason is the inherent power of SQL as a data modeling language, since it naturally expresses relationships and multi-row operations.
For example, SQL goes well beyond traditional key-value NoSQL, allowing multi-row operations both implicitly (through secondary indexes, foreign keys, and JOIN queries) and explicitly (using BEGIN and COMMIT/END TRANSACTION syntax). More importantly, developers love the ease with which they can use SQL to model (and store) data only once, then adapt queries by simply changing JOINs as business needs change.
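The capabilities above can be sketched concretely. The following is a minimal example using SQLite via Python's standard sqlite3 module; the table names and data are illustrative, not from any particular application.

```python
import sqlite3

# In-memory SQLite database; schema and names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce foreign key integrity

conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total REAL NOT NULL
);
""")

# Explicit multi-row transaction: both inserts commit together,
# or neither does (rollback on exception).
with conn:
    conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada')")
    conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 42.0)")

# A JOIN reshapes the same stored data for a new query need,
# without remodeling or duplicating it.
rows = conn.execute("""
    SELECT c.name, o.total
    FROM customers c JOIN orders o ON o.customer_id = c.id
""").fetchall()
print(rows)  # [('Ada', 42.0)]
```

If a later business need required, say, per-customer order counts, only the query changes (a GROUP BY over the same JOIN); the stored data stays as modeled.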