For modern, high-performance web applications we need low latency, and Couchbase excels at that. To maintain the lowest possible latency, even during node failure, we need to achieve a 100% resident ratio for our high-performance buckets. This means that Couchbase serves all of your data from RAM, even the least frequently accessed items; disk is used for persistence only. It turns out that under this condition your usable RAM is a lot less than you might expect: roughly two-thirds of your allocated quota.
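As a rough sketch of what "resident ratio" means in practice: it can be derived from the `curr_items` and `ep_num_non_resident` stats that `cbstats` exposes. The helper below is my own illustration of that arithmetic, not an official Couchbase formula.

```python
def resident_ratio(curr_items: int, num_non_resident: int) -> float:
    """Fraction of a bucket's items held in RAM; 1.0 means fully resident.

    curr_items       -- active items in the bucket (cbstats: curr_items)
    num_non_resident -- items whose values were ejected to disk
                        (cbstats: ep_num_non_resident)
    """
    if curr_items == 0:
        return 1.0  # an empty bucket is trivially 100% resident
    return (curr_items - num_non_resident) / curr_items

# 10M items with 250k ejected to disk is only 97.5% resident --
# the remaining 2.5% of reads would pay the disk-latency penalty.
print(resident_ratio(10_000_000, 250_000))
```

Anything below 1.0 means some fraction of reads will hit disk, which is exactly the latency spike we are trying to avoid.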
Python is an excellent general-purpose, high-level programming language, but it suffers from something that initially affected my success and my motivation: the state of free IDE support. Yes, we have IDLE, Ninja-IDE, PyDev and perhaps a few others, but back then I found them unintuitive and clumsy, and I needed cross-platform support too. Investing in a commercial tool up front made no sense, so I started my Python journey with the Eclipse-PyDev combo and have been using it ever since.
Last year I heard about the "DOOMSDAY SALE" and, although my PyDev setup was sufficient, I purchased PyCharm. I never actually managed to convert my daily routine to it, though, due to lack of time, technical and design differences that affected my productivity and, last but not least, a bit of fear about vendor lock-in.
What excellent news it was when I received my newsletter from JetBrains last week announcing that PyCharm 3 had been released and is available in two editions:
Note: the behaviour/technique explained here is only true up to a certain size; vacuuming is only feasible for smaller databases. For large databases (10 GB+ per file) it is much more efficient to fail the node over, then add it back to the cluster, followed by a rebalance.
Couchbase 1.8 supports two types of buckets, but the "Memcached" bucket is limited: it does not support persistence, failover or rebalance. This article is therefore about the "Couchbase" bucket type and its maintenance.
We tend to forget that this bucket type is persisted, so every single key is saved to disk. This means you have a copy in memory (assuming your resident ratio is 100%) and one on disk, and depending on your cluster setup you will likely have at least one more copy in another node's memory and on its disk (four copies altogether).
With the added metadata overhead it is fair to say that you actually need more disk space on each node than memory in order to fully utilise that node's memory, and you have to consider this when sizing your hardware. Couchbase 2.x requires even more disk space (2x your RAM) per node due to the JSON indexes and the changed persistence layer.
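To make the sizing point concrete, here is a back-of-the-envelope helper based on the figures above. The 10% metadata overhead is my own assumed placeholder, and the 2x factor for 2.x is the rule of thumb from this article, not an official Couchbase sizing formula.

```python
def min_disk_per_node_gb(ram_quota_gb: float,
                         couchbase_2x: bool = True,
                         metadata_overhead: float = 0.10) -> float:
    """Rough per-node disk needed to back a fully RAM-resident bucket.

    ram_quota_gb      -- bucket memory quota on this node
    couchbase_2x      -- 2.x needs roughly 2x RAM on disk (JSON indexes
                        plus the changed persistence layer); 1.8 ~1x
    metadata_overhead -- extra fraction for per-key metadata (assumed 10%)
    """
    factor = 2.0 if couchbase_2x else 1.0
    return ram_quota_gb * factor * (1 + metadata_overhead)

print(min_disk_per_node_gb(64))         # ~140 GB of disk on 2.x
print(min_disk_per_node_gb(64, False))  # ~70 GB of disk on 1.8
```

Note this is per node: with replicas, each node in the cluster needs this headroom independently.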
It’s been a long time coming, the hard work has finally paid off, and the last seven months feel like only a few weeks. Couchbase is now our primary NoSQL (key-value) store in production and we are impressed with the results. This article is about our hands-on experience, benchmarking results and the associated challenges.
We work in the online advertising market, and for today’s internet user speed is everything, so latency is paramount in our application design; we needed something fast to store various user information, including targeting data. A few years ago the choice was Voldemort for its latency and speed, but unfortunately the product was not only vulnerable to cluster changes and disasters but also had a small user community, so support was difficult. Memcached always looked promising, but its lack of clustering and disk persistence made it too “expensive” for our production suite.
Then Couchbase (Membase) came along. It was pretty new on the market and went through a couple of re-brandings in a short period, but it used memcached as a backend handler and added seamless clustering, auto-recovery and disk persistence. Sounds like a dream? Well, it was, but we had to wake up quickly in the middle of our migration because things were just not going the way we wanted.
Welcome to my site. This is my first blog entry and I am delighted to share that my new online diary is up and running. I shall be posting (hopefully) interesting topics about cutting-edge Cloud, DevOps and Data Center technology, and all sorts of cool bits I find worthwhile to share.