MassTransit in memory

3/21/2023

We have a service which consumes messages from a RabbitMQ queue - for each message, it makes a few HTTP calls, collates the results, does a little processing, and then pushes the results to a 3rd party API.

One of the main benefits to having this behind a queue is our usage pattern - the queue usually only has a few messages in it per second, but periodically it will get a million or so messages within 30 minutes (so from ~5 messages/second to ~560 messages/second).

Processing this spike of messages takes ages, and while this service is only on a T2.Medium machine (2 CPUs, 4 GB memory), it only uses 5-10% CPU while processing the messages, which is clearly pretty inefficient.

We use MassTransit when interacting with RabbitMQ as it provides us with a lot of useful features, but by default it sets the number of messages to be processed in parallel to Environment.ProcessorCount * 2. For this project that means 4 messages, and as the process is IO bound, it stands to reason that we could increase that concurrency a bit.

The existing MassTransit setup looks pretty similar to this:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MassTransit;

class ConsumeCountObserver : IConsumeObserver
{
    readonly int _messageCount;
    readonly TaskCompletionSource<bool> _complete = new();

    int _deliveryCount;
    int _pendingDeliveryCount;
    int _maxPendingDeliveryCount;

    public ConsumeCountObserver(int messageCount) => _messageCount = messageCount;

    public int MaxDeliveryCount => _maxPendingDeliveryCount;

    public async Task Wait() => await _complete.Task;

    Task IConsumeObserver.ConsumeFault<T>(ConsumeContext<T> context, Exception exception) => Task.CompletedTask;

    Task IConsumeObserver.PreConsume<T>(ConsumeContext<T> context)
    {
        Interlocked.Increment(ref _deliveryCount);

        // record the highest number of messages being consumed at once
        var pending = Interlocked.Increment(ref _pendingDeliveryCount);
        int max;
        while (pending > (max = _maxPendingDeliveryCount) &&
               Interlocked.CompareExchange(ref _maxPendingDeliveryCount, pending, max) != max) { }

        return Task.CompletedTask;
    }

    Task IConsumeObserver.PostConsume<T>(ConsumeContext<T> context)
    {
        Interlocked.Decrement(ref _pendingDeliveryCount);

        // release Wait() once every expected message has been consumed
        if (Volatile.Read(ref _deliveryCount) >= _messageCount)
            _complete.TrySetResult(true);

        return Task.CompletedTask;
    }
}
```
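To see how an observer like this might be exercised, here is a sketch of a measurement harness using MassTransit's in-memory transport. The `TestMessage`, `TestConsumer`, and queue name are hypothetical, and the `Task.Delay` stands in for the service's real IO-bound HTTP calls; only `ConsumeCountObserver` and `MaxDeliveryCount` come from the setup above. `ConcurrentMessageLimit` is the knob under test - raising it above the `Environment.ProcessorCount * 2` default should let more messages be consumed at once.

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

public record TestMessage;

public class TestConsumer : IConsumer<TestMessage>
{
    public Task Consume(ConsumeContext<TestMessage> context)
        => Task.Delay(100); // stand-in for the IO-bound HTTP calls
}

public static class Program
{
    public static async Task Main()
    {
        const int messageCount = 100;
        var observer = new ConsumeCountObserver(messageCount);

        var bus = Bus.Factory.CreateUsingInMemory(cfg =>
        {
            cfg.ConcurrentMessageLimit = 16; // the knob under test
            cfg.ReceiveEndpoint("test-queue", e => e.Consumer<TestConsumer>());
        });

        bus.ConnectConsumeObserver(observer);
        await bus.StartAsync();

        for (var i = 0; i < messageCount; i++)
            await bus.Publish(new TestMessage());

        await observer.Wait();
        Console.WriteLine($"Max concurrent deliveries: {observer.MaxDeliveryCount}");

        await bus.StopAsync();
    }
}
```

Running this sort of harness with different `ConcurrentMessageLimit` values gives a quick way to confirm the observed concurrency actually changes, without needing a RabbitMQ broker at all.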