By Eric Mersch
Moore’s Law vs. Jevons’ Paradox: The AI Margin Reality
Moore’s Law says that computing prices will continue to drop. Jevons’ Paradox states that overall usage will explode. In short, don’t count on Moore’s Law to grow your gross margins.
Moore’s Law: Costs Are Falling Fast, but that’s Only Half the Story
Moore’s Law is named for Intel co-founder Gordon Moore (1929-2023). In 1965, he observed that the number of transistors on an integrated circuit (computer chip) doubled approximately every two years, driving a corresponding increase in computing power and a decrease in cost per transistor.
Moore’s Law held for nearly 50 years, with transistor counts roughly doubling every 18-24 months. After roughly 2015, the pace slowed as chips approached the physical limits of transistor size, making further density gains increasingly expensive.
We are now seeing Moore’s Law dynamics in computing power. In early 2023, GPT-4-level model performance cost roughly $20 per million tokens. Equivalent performance today costs only $0.40 per million tokens. This trend is largely due to improvements in hardware efficiency. In the 2023/2024 timeframe, the Nvidia H100 cloud GPU cost $8 to $10 per compute hour. Today, that price is between $2.85 and $3.50 per compute hour.
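As a quick sanity check, the percentage declines implied by the figures above can be computed directly. The H100 midpoints are an illustrative assumption drawn from the quoted ranges:

```python
def pct_decline(old: float, new: float) -> float:
    """Percentage drop from an old price to a new price."""
    return (old - new) / old * 100

# GPT-4-level token pricing quoted above: ~$20 -> ~$0.40 per million tokens
token_drop = pct_decline(20.0, 0.40)

# H100 cloud pricing: midpoints of the quoted ranges (illustrative assumption)
gpu_drop = pct_decline((8 + 10) / 2, (2.85 + 3.50) / 2)

print(f"token cost decline: {token_drop:.0f}%")  # ~98%
print(f"H100 price decline: {gpu_drop:.0f}%")    # ~65%
```

Both figures are unit-cost declines only; as the next section argues, they say nothing about total spend.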
The decline in computing costs is important to AI-native and AI-first companies because running AI tools is expensive. Lovable and Cursor reportedly have gross margins around 20%. AI-native business application companies generally operate at 50% gross margins, well below the SaaS company median of 78%. Companies may be counting on Moore’s Law to drive higher margins.
Jevons’ Paradox: Lower Costs Drive Higher Usage, Not Savings
In 1865, William Stanley Jevons (1835-1882) observed that improved steam engine efficiency led to more coal consumption, not less. As Mr. Jevons wrote,
“It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption.” – The Coal Question (1865)
Jevons examined data showing a 50%-75% increase in energy production from 1770 to 1865 and a 15x increase in coal consumption over the same period. The relationship implies that each 1% decrease in cost was accompanied by roughly a 2.5% increase in consumption. Economists call this ratio elasticity; it is unitless. An elasticity of 2.5 illustrates Jevons’ Paradox: any elasticity above 1 means that lower unit cost drives higher total consumption.
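The elasticity concept can be made concrete with a log-ratio estimate. The numbers below are hypothetical, chosen only to show the mechanics, not Jevons’ actual data:

```python
import math

def implied_elasticity(q_ratio: float, p_ratio: float) -> float:
    """Log-ratio (constant) elasticity: percentage growth in consumption
    per percentage decline in price. q_ratio = new/old consumption,
    p_ratio = old/new price."""
    return math.log(q_ratio) / math.log(p_ratio)

# Hypothetical example: consumption up 15x while unit cost falls 3x
e = implied_elasticity(15, 3)
print(f"implied elasticity: {e:.2f}")  # ~2.46
```

Any result above 1.0 signals Jevons-type dynamics: efficiency gains raise total spending rather than reducing it.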
Jevons’ Paradox implies that, even as unit costs fall along the curve Moore’s Law describes, customer usage will increase far more, making it unlikely that companies will see gross margins expand.
AI Usage Growth Outpaces Cost Decline
In its December 2025 report, The State of Enterprise AI, OpenAI stated that API reasoning token consumption per organization increased by ~320x year over year. Comparing that increase in consumption to the corresponding price declines gives an implied elasticity for the AI industry.
The price declines are harder to assess because there are so many computing options. So, we will use a simplified analysis to compare the input costs of the premium and low-cost options.
We start with the original GPT-4 input pricing of $30 per million tokens. Today, companies can choose GPT-4.1, which is considered premium, for only $2.00 per million tokens, a 93% price decrease. The low-cost option is GPT-4o mini at $0.15 per million tokens, a 99.5% decrease.
Comparing the 320x increase in consumption to the steep decline in compute costs yields implied elasticities of 2.13 for the premium option and 1.09 for the low-cost option. Again, any number above 1.0 indicates increased consumption due to a decline in unit cost.
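The implied elasticities above follow from the same log-ratio calculation, using the article’s figures (320x consumption growth; $30 falling to $2.00 and to $0.15 per million tokens):

```python
import math

def implied_elasticity(consumption_mult: float,
                       old_price: float, new_price: float) -> float:
    """Constant-elasticity estimate: ln(consumption growth) / ln(price decline)."""
    return math.log(consumption_mult) / math.log(old_price / new_price)

CONSUMPTION_GROWTH = 320  # ~320x token consumption per organization, per OpenAI

premium = implied_elasticity(CONSUMPTION_GROWTH, 30.0, 2.00)   # GPT-4 -> GPT-4.1
low_cost = implied_elasticity(CONSUMPTION_GROWTH, 30.0, 0.15)  # GPT-4 -> low-cost tier

print(f"premium: {premium:.2f}")   # ~2.13
print(f"low-cost: {low_cost:.2f}") # ~1.09
```

Even under the most aggressive price-decline assumption, the elasticity stays above 1.0, so total spend rises.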
Plan for Rising Spend, Not Expanding Margins
As foundation model compute prices fall, adoption will explode, and total AI spend will continue to depress gross margins.
Financial leaders should plan accordingly:
- Budget for growth in total AI spend, not savings from unit cost declines. Assume usage expands faster than pricing improves.
- Shift focus from cost per unit to cost per outcome. Measure ROI at the workflow or revenue level, not per token or compute hour.
- Implement usage controls early. Without guardrails, consumption will scale quickly and unpredictably.
- Revisit pricing models. If your product usage scales with compute, ensure pricing captures that value, or margins will compress.
- Reset margin expectations for AI-native businesses. AI-driven products may not achieve traditional SaaS margin profiles without structural changes.
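The usage-control point can be sketched in code. The following is a hypothetical per-team monthly token budget guard; the class name, thresholds, and actions are illustrative assumptions, not a prescribed implementation:

```python
# Hypothetical sketch of a monthly token budget guard. Names and
# thresholds are illustrative, not from the article.
from dataclasses import dataclass

@dataclass
class TokenBudget:
    monthly_limit: int
    used: int = 0
    alert_at: float = 0.8  # warn once 80% of the budget is consumed

    def record(self, tokens: int) -> str:
        """Record usage and return the resulting control action."""
        self.used += tokens
        if self.used >= self.monthly_limit:
            return "block"  # hard stop: route to a cheaper model or queue
        if self.used >= self.alert_at * self.monthly_limit:
            return "warn"   # notify the finance and engineering owners
        return "ok"

budget = TokenBudget(monthly_limit=10_000_000)
print(budget.record(7_000_000))  # "ok"   (70% used)
print(budget.record(2_000_000))  # "warn" (90% used)
print(budget.record(1_500_000))  # "block" (over limit)
```

Even a guard this simple turns unpredictable consumption growth into an explicit budgeting decision rather than a surprise on the cloud bill.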
Bottom line: AI efficiency gains increase demand faster than they reduce cost. If you don’t actively manage consumption and pricing, margin compression is the default outcome.
FLG Partners works with CEOs and Boards to model AI-driven cost structures, implement usage controls, and align pricing to protect margins. Connect with our team to assess your exposure.
