Batch processing in Mule is a vast and fairly difficult topic to understand. It can also be slow at times, and improving its performance requires a sound understanding of how batch processing works. This video takes a deep dive into batch processing and explains most of the concepts at length.
Note: At 12:01 I said it's 3x the original file; it's actually 5x the original file. Sometimes my math doesn't work well :D
Good to know concepts:
Optimizing Max Concurrency - Make sure you watch the Parallel For Each video to understand how to optimize max concurrency.
🔗 [ Link ]
Streaming - Useful for understanding the streaming model in the Batch Aggregator, which supports a forward-only iterator.
🔗 [ Link ]
Batch job Part 1:
🔗 [ Link ]
⏱ Video Timestamps
==========================
0:00 Start
0:32 Use case introduction
3:28 Exploring the persistent queues created by the batch job
4:45 Exploring the object store that holds the batch instance details
7:50 Simulating a crash scenario and observing the recovery
12:25 Playing with the batch block size and gaining performance improvements
17:35 Handling errors using accept policies and understanding the max failed records parameter
28:25 Checking the object store for failed instances
41:00 Fixed-size batch aggregator
43:44 Streaming aggregator
46:38 Realizing the forward-only iterator in streaming aggregator
52:25 Variable behavior in batch jobs
01:05:33 Choosing between batch/for-each/parallel for-each
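For quick reference, here is a minimal sketch of a Mule 4 batch job showing where the parameters discussed in the video live (block size, max concurrency, max failed records, step accept policies, and a fixed-size aggregator). Names and values are illustrative, not from the video:

```xml
<flow name="batchDemoFlow">
  <!-- blockSize: records pulled per block from the persistent queue -->
  <!-- maxFailedRecords: job stops once this many records fail (-1 = never) -->
  <!-- maxConcurrency: upper bound on blocks processed in parallel -->
  <batch:job jobName="recordsBatchJob"
             blockSize="100"
             maxFailedRecords="10"
             maxConcurrency="4">
    <batch:process-records>
      <!-- NO_FAILURES: only records that have not failed enter this step -->
      <batch:step name="transformStep" acceptPolicy="NO_FAILURES">
        <!-- per-record processing goes here -->
      </batch:step>
      <batch:step name="writeStep" acceptPolicy="NO_FAILURES">
        <!-- fixed-size aggregator: payload is a list of up to 50 records;
             use streaming="true" instead for the forward-only iterator -->
        <batch:aggregator size="50">
          <logger level="INFO" message="#[sizeOf(payload)]"/>
        </batch:aggregator>
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <!-- payload here is the BatchJobResult with success/failure counts -->
      <logger level="INFO" message="#[payload]"/>
    </batch:on-complete>
  </batch:job>
</flow>
```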
📌 Related Links
==========================
🔗 Cache in Mule 4: [ Link ]
🔗 Transactions in Mule 4: [ Link ]
🔗 Classloading Isolation in Mule 4: [ Link ]
🔗 API Design Best Practices: [ Link ]
🔗 Custom Policy: [ Link ]
🔗 API Gateway and Autodiscovery: [ Link ]
🔗 Global Error Handler: [ Link ]
🎬 Popular Mule 4 Playlists
==========================
💥 Advanced Concepts in Mule: [ Link ]
💥 Mule 4 Custom Connectors: [ Link ]
💥 Dataweave Series: [ Link ]
Let's connect:
=========================
💥 Twitter: [ Link ]