The invoice arrived like it did every month. But this time, the number stopped them cold: $47,000 in cloud egress fees for moving unprocessed IoT data from the factory floor to the data lake.
Across industries, teams building real-time data pipelines discover a brutal truth: sending raw data to the cloud is expensive. Really expensive. And it's getting worse as sensor volumes explode.
Here's the math most people miss — and the solution that changes how teams think about data movement.
The Problem No One Talks About
Cloud egress fees are the silent killer of data initiatives. When you're streaming millions of sensor readings per minute from industrial equipment, vehicles, or retail locations, you're not just paying for compute. You're paying to move all that data out of the cloud provider's network.
For a mid-sized industrial deployment:
- 10,000 sensors generating readings every second
- Average payload: 500 bytes per reading
- Monthly data volume: ~1.3 GB per sensor
- At $0.09/GB egress: that's ~$0.12/month per sensor
Scale to 10,000 sensors, and you're looking at roughly $1,200 a month, or about $14,000 annually. Just for moving bytes.
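That arithmetic is easy to sanity-check. Here's a quick back-of-the-envelope script, assuming one 500-byte reading per second per sensor and the $0.09/GB egress rate used above (plug in your own figures):

```python
# Back-of-the-envelope egress cost model. The payload size, reading
# cadence, and $/GB rate are illustrative figures, not quotes.
SECONDS_PER_MONTH = 60 * 60 * 24 * 30
PAYLOAD_BYTES = 500
EGRESS_PER_GB = 0.09      # USD, a typical public-cloud list price
SENSORS = 10_000

def monthly_egress_gb(readings_per_sec: float, payload_bytes: int) -> float:
    """Decimal GB sent per sensor per month."""
    return readings_per_sec * SECONDS_PER_MONTH * payload_bytes / 1e9

per_sensor_gb = monthly_egress_gb(1.0, PAYLOAD_BYTES)   # ~1.3 GB
per_sensor_cost = per_sensor_gb * EGRESS_PER_GB         # ~$0.12
fleet_annual = per_sensor_cost * SENSORS * 12           # ~$14,000

print(f"{per_sensor_gb:.2f} GB/sensor/mo -> ${per_sensor_cost:.2f}/sensor/mo")
print(f"Fleet of {SENSORS:,}: ~${fleet_annual:,.0f}/year")
```

Swap in your own cadence, payload, and rate; the shape of the calculation stays the same.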
The kicker? Most of that data isn't useful in its raw form. It's noise — temperature fluctuations within normal ranges, redundant GPS pings, sensor drift that needs filtering before it tells you anything meaningful.
What Edge Processing Actually Means
Edge processing isn't a buzzword. It's running your data logic closer to where the data is born — on-premises, on the device, or at the network edge — before anything hits the cloud.
Instead of sending every single sensor reading upstream, you filter, aggregate, and transform at the edge. Only relevant insights or aggregated metrics travel across the wire.
The math shifts dramatically:
| Metric | Cloud-Only | Edge + Cloud |
|---|---|---|
| Raw data sent | 1.3 GB/sensor/mo | 0.01 GB/sensor/mo |
| Egress cost | ~$0.12/sensor/mo | ~$0.001/sensor/mo |
| Annual (10K sensors) | ~$14,000 | ~$110 |
| Savings | — | ~99% |

That's not a typo. Local filtering and aggregation can cut your bandwidth costs by 99%.
A Real Example: Manufacturing Floor
Consider a concrete scenario from a manufacturing deployment:
The setup: 500 machines, each with 20 sensors reporting every second. Raw data stream: 864 million records per day.
Without edge processing:
- Daily cloud egress: ~432 GB (at 500 bytes per record)
- Monthly cost: ~$1,200 just in bandwidth
- Plus compute costs to process all that noise
With edge processing:
- Each edge node filters out readings within normal ranges and aggregates 1-second data into 5-minute summaries
- Daily cloud egress: ~4 GB
- Monthly cost: ~$11
- Total savings: roughly $1,150/month, or about $14,000/year
And the processing logic at the edge is doing more than filtering — it's enriching data with local context, handling protocol conversions (OPC-UA to JSON, for example), and routing only actionable events upstream.
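The roll-up step described above, collapsing 1-second readings into 5-minute summaries, can be sketched as a small windowed aggregator. This is a simplified stand-in for whatever your edge runtime provides; the field names and window size are illustrative.

```python
from collections import defaultdict
from dataclasses import dataclass

WINDOW_SECONDS = 300  # 5-minute windows, as in the example above

@dataclass
class Summary:
    """Replaces up to 300 one-second readings with four numbers."""
    count: int = 0
    total: float = 0.0
    low: float = float("inf")
    high: float = float("-inf")

    def add(self, value: float) -> None:
        self.count += 1
        self.total += value
        self.low = min(self.low, value)
        self.high = max(self.high, value)

    @property
    def mean(self) -> float:
        return self.total / self.count

def aggregate(readings):
    """Group (unix_ts, value) pairs into per-window summaries."""
    windows: dict[int, Summary] = defaultdict(Summary)
    for ts, value in readings:
        windows[int(ts) // WINDOW_SECONDS].add(value)
    return dict(windows)

# Three raw readings collapse into two window summaries.
result = aggregate([(0, 1.0), (10, 3.0), (301, 5.0)])
print(result[0].mean, result[0].low, result[0].high)  # 2.0 1.0 3.0
```

One summary per window replaces up to 300 raw records, which is where the bulk of the 99% reduction comes from.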
When Edge Makes Sense (And When It Doesn't)
Edge processing isn't universal. It shines when:
- Data volumes are massive — millions of events per day
- Latency matters — you need sub-second responses
- Bandwidth costs are painful — egress fees are a line item you want to shrink
- Connectivity is unreliable — edge nodes can buffer during outages
You might skip it if:
- Data volumes are manageable (under 10 GB/day)
- All processing happens in a single cloud region anyway
- Your team has no on-prem or edge infrastructure capacity
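On the unreliable-connectivity point: store-and-forward buffering is the usual pattern. A minimal sketch, where the `send` callable stands in for whatever uplink client you actually use, and the capacity and drop policy are illustrative choices:

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward: queue events while the uplink is down,
    flush oldest-first once it recovers, drop oldest on overflow."""

    def __init__(self, send, capacity: int = 10_000):
        self._send = send                      # stand-in for a real uplink client
        self._queue = deque(maxlen=capacity)   # bounded: oldest dropped first

    def publish(self, event) -> None:
        self._queue.append(event)
        self.flush()

    def flush(self) -> None:
        while self._queue:
            try:
                self._send(self._queue[0])
            except ConnectionError:
                return                         # still offline; keep buffering
            self._queue.popleft()

# Simulate an outage: two events buffer locally, then flush on reconnect.
sent, online = [], False
def send(event):
    if not online:
        raise ConnectionError("uplink down")
    sent.append(event)

buf = EdgeBuffer(send)
buf.publish("e1")
buf.publish("e2")          # both held locally while offline
online = True
buf.publish("e3")          # triggers flush of the backlog
print(sent)  # ['e1', 'e2', 'e3']
```

The bounded queue is a deliberate choice: on a constrained edge device you'd rather lose the oldest raw readings than run out of memory during a long outage.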
The Bigger Picture
Cutting egress costs is the visible win. But the ripple effects matter too:
- Faster insights: Processing at the edge reduces round-trip latency from seconds to milliseconds
- Better reliability: Local processing keeps working when connectivity drops
- Compliance wins: Sensitive data stays on-prem, only anonymized insights go to the cloud
Your Turn
If you're spending more than $5,000/month on cloud egress for streaming data, edge processing probably makes sense. Run the numbers on your own setup: filter out the data that doesn't matter, aggregate what you can, and send only what remains.
For teams ready to make the shift, modern edge orchestration platforms such as layline.io treat edge deployment as a first-class concern: container-native, able to run anywhere (industrial PC, gateway, Kubernetes cluster), with the same visual workflow for edge and cloud processing.
Want help modeling edge savings? Run a quick audit of your current data volumes and egress costs, then compare against what edge processing could look like. Book a technical chat — a solutions engineer can walk through the numbers.