(Work in progress) Just the other day, "SurgeMQ: MQTT Message Queue @ 750,000 MPS", an article about surgemq's speed, was published by its author and made quite a stir on my Twitter timeline. surgemq still has a lot of unimplemented functionality at this point, but achieving that speed in Go is remarkable. For comparison, nsqd does roughly 100,000 msg/sec and my own Go MQTT server does about 40,000 msg/sec; if you write that kind of TCP server in Go without doing anything special, you can barely reach 10,000–20,000 msg/sec on a single core, so you can see how impressive this is. LMAX Disruptor style RingBuffer: surgemq's implementation reportedly drew on the LMAX Disruptor's ring buffer. The Disruptor builds a fast general-purpose queue on top of a ring buffer…
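surgemq itself is written in Go, but the core idea the excerpt points at — a Disruptor-style ring buffer instead of a locked queue — is easy to show in miniature. Below is a minimal, illustrative single-producer/single-consumer ring buffer in Java (class and field names are mine, not surgemq's): a fixed power-of-two array indexed by monotonically increasing sequence numbers, with no locks on the hot path.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal single-producer/single-consumer ring buffer in the spirit of the
// LMAX Disruptor. Illustrative sketch only, not production code.
final class SpscRingBuffer<T> {
    private final Object[] slots;
    private final int mask;                                     // size - 1; works because size is a power of two
    private final AtomicLong producerSeq = new AtomicLong(-1);  // last published sequence
    private final AtomicLong consumerSeq = new AtomicLong(-1);  // last consumed sequence

    SpscRingBuffer(int sizePowerOfTwo) {
        slots = new Object[sizePowerOfTwo];
        mask = sizePowerOfTwo - 1;
    }

    /** Returns false when the ring is full (producer would lap the consumer). */
    boolean offer(T value) {
        long next = producerSeq.get() + 1;
        if (next - consumerSeq.get() > slots.length) return false;  // full
        slots[(int) (next & mask)] = value;
        producerSeq.lazySet(next);  // release-store publishes the slot to the consumer
        return true;
    }

    /** Returns null when the ring is empty. */
    @SuppressWarnings("unchecked")
    T poll() {
        long next = consumerSeq.get() + 1;
        if (next > producerSeq.get()) return null;  // empty
        T value = (T) slots[(int) (next & mask)];
        consumerSeq.lazySet(next);
        return value;
    }
}
```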
Recently a respected member of the Apache community tried Log4j 2 and wrote on Twitter: "@TheASF #log4j2 rocks big times! Performance is close to insane ^^ http://logging.apache.org/log4j/2.x/" — Mark Struberg (@struberg), May 7, 2013. It happened shortly after Remko Popma contributed…
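This excerpt appears to come from the Log4j 2 documentation on asynchronous loggers. As a rough illustration only, here is a minimal sketch using the standard Log4j 2 API, with all loggers switched to asynchronous mode via the documented Log4jContextSelector system property (the class name is mine):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AsyncLoggingExample {
    private static final Logger LOG = LogManager.getLogger(AsyncLoggingExample.class);

    public static void main(String[] args) {
        // With
        //   -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
        // on the JVM command line, all loggers become asynchronous: the call below
        // only enqueues the event, and a background thread performs the actual I/O.
        LOG.info("Hello from an async logger, arg={}", 42);
    }
}
```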
When you are optimizing the performance of your Storm topologies it helps to understand how Storm’s internal message queues are configured and put to use. In this short article I will explain and illustrate how Storm version 0.8/0.9 implements the intra-worker communication that happens within a worker process and its associated executor threads. Internal messaging within Storm worker processes…
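The buffers the article goes on to discuss are exposed as ordinary topology configuration. The sketch below shows the four knobs as I understand them from Storm 0.8/0.9's backtype.storm.Config; the constant names and the illustrative values are assumptions to check against your Storm version, not recommendations.

```java
import backtype.storm.Config;

// Sketch of the intra-worker buffer settings discussed in the article.
// All sizes should be powers of two, as these are Disruptor-backed queues.
public class InternalBufferConfig {
    public static Config tunedConfig() {
        Config conf = new Config();
        // Incoming queue of each executor thread (tuples waiting to be processed).
        conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
        // Outgoing queue of each executor thread.
        conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);
        // Queue the worker's receive thread uses to hand batches to executors.
        conf.put(Config.TOPOLOGY_RECEIVER_BUFFER_SIZE, 8);
        // Worker-level transfer queue for tuples destined for other workers.
        conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE, 32);
        return conf;
    }
}
```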
Java Core | Understanding the Disruptor: a Beginner's Guide to Hardcore Concurrency | Trisha Gee & Mike Barker | 2011-11-02 | 05:45 PM - 06:35 PM | Victoria. The Disruptor is a new open-source concurrency framework, designed as a high-performance mechanism for inter-thread messaging. It was developed at LMAX as part of our efforts to build the world's fastest financial exchange. Using the Disruptor as…
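For readers who have not seen the API, a minimal sketch of publishing and consuming events is below. It targets the later Disruptor 3.x DSL (which postdates this 2011 talk), and the ValueEvent class is mine; the claim/write/publish pattern on the ring buffer is the part the talk covers.

```java
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorHelloWorld {
    // Mutable event preallocated in the ring buffer; producers fill it in place,
    // so no allocation happens on the hot path.
    static class ValueEvent {
        long value;
    }

    public static void main(String[] args) {
        // Ring buffer size must be a power of two.
        Disruptor<ValueEvent> disruptor =
                new Disruptor<>(ValueEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Consumer: runs on its own thread and sees events in sequence order.
        disruptor.handleEventsWith((EventHandler<ValueEvent>) (event, sequence, endOfBatch) ->
                System.out.println("got " + event.value + " at seq " + sequence));
        disruptor.start();

        // Producer: claim a slot, write into it, then publish the sequence.
        RingBuffer<ValueEvent> ringBuffer = disruptor.getRingBuffer();
        for (long i = 0; i < 10; i++) {
            long seq = ringBuffer.next();
            try {
                ringBuffer.get(seq).value = i;
            } finally {
                ringBuffer.publish(seq);
            }
        }
        disruptor.shutdown();
    }
}
```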
Geekswithblogs.net, founded in 2003, had a very long run. The future of the site is now back in the hands of the original founder, Jeff Julian, and that is why you are here at Julian Farms or my consulting firm, Squared Digital. What’s next? Glad you asked. I still believe there is a place for blogs in this digital era of the 2020s, but I don’t believe I have a full picture of what it should look like…
A common pattern in real-time data workflows is performing rolling counts of incoming data points, also known as sliding window analysis. A typical use case for rolling counts is identifying trending topics in a user community – such as on Twitter – where a topic is considered trending when it has been among the top N topics in a given window of time. In this article I will describe how to implement…
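As a sketch of the rolling-count idea (not the article's actual code), the window can be divided into a fixed number of slots arranged in a ring: each incoming object increments the current slot, and advancing the window wipes the oldest slot. Class and method names below are mine.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal slot-based sliding-window counter. Totals always cover the last
// numSlots window advances.
final class SlidingWindowCounter<T> {
    private final Map<T, long[]> countsPerSlot = new HashMap<>();
    private final int numSlots;
    private int headSlot = 0;

    SlidingWindowCounter(int numSlots) {
        this.numSlots = numSlots;
    }

    /** Count one occurrence of obj in the current slot. */
    void incrementCount(T obj) {
        countsPerSlot.computeIfAbsent(obj, k -> new long[numSlots])[headSlot]++;
    }

    /** Return totals over the whole window, then advance it by one slot. */
    Map<T, Long> getCountsThenAdvanceWindow() {
        Map<T, Long> totals = new HashMap<>();
        for (Map.Entry<T, long[]> e : countsPerSlot.entrySet()) {
            long sum = 0;
            for (long c : e.getValue()) sum += c;
            totals.put(e.getKey(), sum);
        }
        headSlot = (headSlot + 1) % numSlots;
        // The slot we just moved into holds the oldest data; reset it.
        for (long[] slots : countsPerSlot.values()) slots[headSlot] = 0;
        return totals;
    }
}
```

In a Storm topology this would typically be driven by a periodic tick, e.g. advancing the window every windowLength / numSlots seconds and emitting the returned totals downstream.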
How often have we all heard that “batching” will increase latency? As someone with a passion for low-latency systems this surprises me. In my experience, when batching is done correctly it not only increases throughput, it can also reduce average latency and keep it consistent. Well then, how can batching magically reduce latency? It comes down to what algorithm and data structures are employed…
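A minimal sketch of the pattern, assuming a plain java.util.concurrent BlockingQueue rather than any particular queue implementation: block for the first message, then drain whatever else has already arrived and pay the expensive operation once per batch. Under load the batch grows and per-item cost shrinks; when idle it degenerates to one item at a time, so no artificial delay is introduced.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SmartBatchingConsumer implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void submit(String msg) {
        queue.add(msg);
    }

    @Override
    public void run() {
        List<String> batch = new ArrayList<>();
        try {
            while (!Thread.currentThread().isInterrupted()) {
                batch.add(queue.take());  // wait for at least one message
                queue.drainTo(batch);     // grab everything else already queued
                flush(batch);             // one expensive call for the whole batch
                batch.clear();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void flush(List<String> batch) {
        // Stand-in for the costly operation being amortized (a write, a syscall, a flush).
        System.out.println("flushed " + batch.size() + " messages");
    }
}
```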
When trying to build a highly scalable system the single biggest limitation on scalability is having multiple writers contend for any item of data or resource. Sure, algorithms can be bad, but let’s assume they have a reasonable Big O notation so we'll focus on the scalability limitations of the systems design. I keep seeing people just accept having multiple writers as the norm. There is a lot…
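A minimal sketch of the single-writer idea (names are mine): any number of threads may submit commands, but exactly one thread ever applies them, so the mutable state itself is never contended and needs no locks or CAS retries.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SingleWriterCounter {
    private final BlockingQueue<Long> increments = new ArrayBlockingQueue<>(1 << 16);
    private long total;  // touched only by the single writer thread below

    public void start() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    total += increments.take();  // the single writer mutates the state
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "single-writer");
        writer.setDaemon(true);
        writer.start();
    }

    /** Callable from any number of threads; they contend on the queue, not on the state. */
    public void add(long delta) {
        increments.add(delta);
    }
}
```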