To mimic our production workload in testing, we had to come up with a low-cost solution to load balance HTTP traffic among a few application servers. In addition, the initial request had to be distributed evenly amongst the backend nodes, but subsequent requests from the same client needed to be handled by the same backend server (sticky sessions).
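One low-cost way to get exactly this behaviour is a software load balancer such as HAProxy with cookie-based persistence: the first request is balanced round-robin, and a cookie pins the client to that backend afterwards. A minimal sketch (server names and addresses are placeholders, not our actual setup):

```
frontend http_in
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin                   # even distribution for first requests
    cookie SRV insert indirect nocache   # sticky cookie for subsequent requests
    server app1 10.0.0.11:8080 check cookie app1
    server app2 10.0.0.12:8080 check cookie app2
```

With `insert indirect nocache`, HAProxy adds the `SRV` cookie itself and strips it before the request reaches the backend, so the application servers need no changes.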
I am a sysop/devops engineer who loves open source and security, so I tend to ignore commercial software. For a password vault, I have been using KeePass for years and have been happy with it, except for a couple of things:
* it is written in .NET, so cross-platform integration has its challenges
* browser integration
Although the browser integration is reasonably good on Windows now, it’s not as “refined” as that of commercial competitors such as Dashlane or 1Password. So lately I decided to investigate these utilities to see if they could convince me to switch…
I really liked the Pylint integration in Eclipse/PyDev, but I switched to PyCharm when JetBrains released the Community Edition. PyCharm supports PEP8 auditing “out of the box”, but I found out lately that it is a little “loose” on style compared to pylint. Running pylint from PyCharm did not seem to be supported in any way, so I became curious about how I could add this support to my favourite IDE.
After some searching, I realised that there is not much out there on this topic; however, I could not accept that and took on the challenge. Continue reading
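To illustrate the difference in strictness (a hypothetical snippet, not from the article): the code below is clean as far as PEP8/pycodestyle is concerned, yet pylint still complains, because pylint additionally enforces naming conventions and docstrings.

```python
# This file passes PEP8/pycodestyle checks, but pylint flags it with
# (message IDs from recent pylint versions; older ones used C0111):
#   C0114: missing-module-docstring
#   C0116: missing-function-docstring
#   C0103: constant name "maxSize" doesn't conform to UPPER_CASE naming style

maxSize = 100


def grow(value):
    return min(value * 2, maxSize)
```

PEP8 checking only looks at layout (indentation, whitespace, line length), which is why PyCharm’s built-in audit lets this through.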
We are an agile software development company, and agile is great for a “moving target”. We plan, work and implement changes in small batches, and ongoing refactoring is just the nature of what we do.
We recently added functionality and increased traffic for one of our Java products utilising Apache Camel and ActiveMQ. The product has been in production for years, with practically a zero defect rate. Not long after deploying the new code, our monitoring system triggered alerts about an unusually high number of TCP TIME_WAIT connections on the server running the new code, so we began troubleshooting and found they were all ActiveMQ connections to our broker. Our developers immediately confirmed:
“no change on the ActiveMQ connection manager side”
Well, it turned out that this was exactly the problem. Continue reading
I am heavily into Salt infrastructure management at the moment and wish to leverage all the available (community-written) states and formulas. Luckily, the SaltStack group maintains a collection of excellent formulas on their GitHub page, which is a great source of states, ideas, best practices, etc. So I started cloning them, beginning with the ones I really needed. Then I realised that I might need some of the others in the near future, so why not clone all of them and keep a local copy for our development?
The repositories have been updated fairly regularly lately, with more and more people contributing to the project, which is great; however, finding new states started to become tedious and I needed an automated way to keep up with the changes. Continue reading
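A simple way to automate this is to ask the GitHub API for the organisation’s repository list and clone anything new while pulling the rest. A sketch (the org name is real; `sync_commands` and the `formulas` directory are my own illustrative choices, and a complete mirror would also follow the API’s pagination links):

```python
import json
import os
import subprocess
import urllib.request

ORG = "saltstack-formulas"  # the GitHub organisation hosting the formulas


def list_repos(org, per_page=100):
    """Return repository names for a GitHub org (first page only;
    the API paginates, so follow the Link headers for a full mirror)."""
    url = f"https://api.github.com/orgs/{org}/repos?per_page={per_page}"
    with urllib.request.urlopen(url) as resp:
        return [repo["name"] for repo in json.load(resp)]


def sync_commands(names, base_dir="formulas"):
    """Build git commands: clone repos we don't have yet, pull the rest."""
    cmds = []
    for name in names:
        dest = os.path.join(base_dir, name)
        if os.path.isdir(dest):
            cmds.append(["git", "-C", dest, "pull", "--ff-only"])
        else:
            cmds.append(["git", "clone",
                         f"https://github.com/{ORG}/{name}.git", dest])
    return cmds


def sync_all():
    """Clone or update every formula (requires network access and git)."""
    for cmd in sync_commands(list_repos(ORG)):
        subprocess.run(cmd, check=True)
```

Run `sync_all()` from cron and new formulas appear in the local mirror automatically.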
Amazon VPC has been out for some time, offering full control of isolated local networking in the cloud. This means you can have your own private subnet in the cloud, control which private IPs your instances use, change the instance type should your resource requirements increase, and so forth.
This guide is technical and intended for experienced professionals; I will discuss options and solutions for securely integrating your on-site (private) LANs with Amazon VPC. The setup is based on an OpenVPN client running on an instance inside the VPC, connecting to my remote branch firewall running pfSense 2.1.3 and an OpenVPN server. The point-to-point tunnel between client and server is two-way: both the client and the server expose their local networks and route traffic to the other side accordingly. But first, let’s take a look at what other options we have. Continue reading
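For a two-way tunnel like this, the key OpenVPN pieces are a `route` on each side for the other side’s LAN, plus an `iroute` on the server so it knows which client owns the remote subnet. A minimal sketch (all hostnames and subnets are placeholders, not my actual networks):

```
# Client side (EC2 instance inside the VPC)
client
dev tun
proto udp
remote branch-fw.example.com 1194
route 192.168.10.0 255.255.255.0      # send branch-LAN traffic into the tunnel

# Server side (pfSense OpenVPN server), the matching pieces:
#   route 10.0.1.0 255.255.255.0      # kernel route for the VPC subnet
#   client-config-dir ccd
#   # ccd/<client-cert-CN> contains:
#   #   iroute 10.0.1.0 255.255.255.0 # tell OpenVPN which client owns it
```

Remember also that the EC2 instance forwarding this traffic needs its source/destination check disabled, and the VPC route table needs a route for the branch subnet pointing at that instance.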
For modern, high-performance web applications we need low latency, and Couchbase excels at that. To maintain the lowest possible latency even during node failure, we need to achieve a 100% resident ratio for our high-performance buckets. This means that Couchbase serves all your data from RAM, even the least frequently accessed items; disk is used for persistence only. It turns out that under this condition your usable RAM is a lot less: about two thirds of your allocated quota.
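As a quick worked example of that claim (illustrative numbers only):

```python
# If roughly one third of the bucket quota goes to metadata and overhead,
# a 6 GB quota leaves only about 4 GB for the values themselves.
quota_mb = 6 * 1024
usable_mb = quota_mb * 2 // 3
print(usable_mb)  # -> 4096
```

So to keep a 6 GB working set fully resident, you would actually need to allocate around 9 GB of quota.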
Python is an excellent general-purpose, high-level programming language, but it suffers from something that initially affected my success and motivation: “free IDE support”. Yes, we have IDLE, Ninja-IDE, PyDev and perhaps a few others, but back then I found them unintuitive and clumsy, and I needed cross-platform support too. Investing up front made no sense, so I started my Python journey with the Eclipse-PyDev combo and have been using it ever since.
Last year I heard about the “DOOMSDAY SALE” and, although my PyDev setup was sufficient, I purchased PyCharm but never actually managed to convert my daily routine to it, due to lack of time, technical/design differences that affected my productivity and, last but not least, a bit of fear of vendor lock-in.
What excellent news it was when I received the JetBrains newsletter last week announcing that PyCharm 3 had been released and is available in two editions:
Note: the behaviour/technique explained here only holds true up to a certain size; vacuuming is only feasible for smaller databases. For large databases (10 GB+ per file) it is much more efficient to fail the node over and then add it back to the cluster, followed by a rebalance.
Couchbase 1.8 supports two types of buckets, but the “Memcached” bucket is limited: it does not support persistence, failover or rebalance. This article is therefore about the “Couchbase” bucket type and its maintenance.
We tend to forget that this bucket type is persisted, so every single key is saved to disk. This means you have a copy in memory (assuming your resident ratio is 100%) and one on disk, and depending on your cluster setup you will likely have at least one more copy in another node’s memory and on its disk (four copies altogether).
With the added metadata overhead, it’s fair to say that you actually need more disk space than memory on each node to be able to fully utilise the node’s memory, and you have to take this into account when sizing your hardware. Couchbase 2.x requires even more disk space (2x your RAM) per node due to the JSON indexes and the changed persistence layer.
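The rules of thumb above can be turned into a back-of-envelope sizing helper. This is my own sketch, not an official Couchbase formula, and it assumes a 100% resident ratio, i.e. every copy of the data lives in RAM as well as on disk:

```python
def per_node_requirements(active_data_gb, nodes, replicas=1, couchbase_2x=True):
    """Rough per-node RAM and disk needs for a fully resident bucket."""
    copies = 1 + replicas                     # active + replica copies
    ram_gb = active_data_gb * copies / nodes  # each copy held in some node's RAM
    # Disk: at least as much as RAM (metadata pushes it higher);
    # Couchbase 2.x wants roughly 2x the RAM footprint on disk.
    disk_factor = 2 if couchbase_2x else 1
    disk_gb = ram_gb * disk_factor
    return ram_gb, disk_gb
```

For example, 20 GB of active data with one replica on a 4-node 2.x cluster works out to about 10 GB of RAM and 20 GB of disk per node, before metadata overhead.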
It’s been a long time coming, the hard work has finally paid off, and the last 7 months feel like only a few weeks. Couchbase is now our primary NoSQL (key-value) store in production and we are impressed with the results. This article is about our hands-on experience, benchmarking results and the associated challenges.
We work in the online advertising market, and for today’s internet user speed is everything, so latency is paramount in our application design; we needed something fast to store various user information, including targeting data. A few years ago the choice was Voldemort for its latency and speed, but unfortunately the product was not only vulnerable to cluster changes and disasters, it also had a small user base, so support was difficult. Memcached always looked promising, but the lack of clustering and disk persistence made it too “expensive” for our production suite.
Then Couchbase (Membase) came along. It was pretty new on the market and went through a couple of re-brandings in a short period, but it offered “memcached” as a backend handler along with seamless clustering, auto-recovery and disk persistence. Sounds like a dream? Well, it was, but we had to wake up quickly in the middle of our migration because it was just not going the way we wanted to.