RBI is correctly taking baby steps around fintech regulation

This post is a response to the YourStory article – Is RBI over-regulating by allotting online platforms additional responsibilities?

The gist of the article is that the RBI is overreaching by imposing capital and regulatory requirements on a nascent and developing P2P lending industry. Since my startup operates in the same space, I have some perspective on this.

Harini makes a good point that it is generally a good idea not to over-regulate certain segments. For example, the Monetary Authority of Singapore not only provides financial support to fintech startups, but has also committed not to interfere until startups reach a certain size.

However, I also feel that the RBI is not being unfair in imposing certain regulations on this sector, for example the requirement that P2P lending startups register as NBFCs. It is easy to call these requirements “absurd”, but if we look at the worldwide standards around lending platforms, there is precedent for imposing volume-based financial resource requirements. For example, the UK-based FCA imposed the following constraints on any crowdfunding platform (including P2P):

  • 0.2% of the first 50 million pounds of total loans issued and outstanding;
  • 0.15% of the next 200 million pounds of total loans issued and outstanding;
  • 0.1% of the next 250 million pounds of total loans issued and outstanding; and
  • 0.05% of any remaining balance of loans issued and outstanding.
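To make the comparison with the flat 2 crore requirement concrete: the tiered schedule above works like marginal tax slabs. A minimal sketch (amounts in pounds; the function name and structure are my own illustration, not the FCA's wording):

```python
# FCA volume-based financial resources requirement for crowdfunding/P2P
# platforms, applied marginally, slab by slab, per the schedule above.
TIERS = [
    (50_000_000, 0.002),     # 0.2% of the first 50 million
    (200_000_000, 0.0015),   # 0.15% of the next 200 million
    (250_000_000, 0.001),    # 0.1% of the next 250 million
    (float("inf"), 0.0005),  # 0.05% of any remaining balance
]

def fca_capital_requirement(loan_book: float) -> float:
    """Minimum financial resources (pounds) for a given loan book."""
    requirement, remaining = 0.0, loan_book
    for width, rate in TIERS:
        slab = min(remaining, width)  # portion of the book in this tier
        requirement += slab * rate
        remaining -= slab
        if remaining <= 0:
            break
    return requirement
```

A platform with 100 million pounds of loans outstanding would need 0.2% of 50m plus 0.15% of the next 50m, i.e. 175,000 pounds, which scales with the platform rather than being a one-size-fits-all number.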

Interestingly, these constraints came after several rounds of back and forth between startups and the FCA. Now, the NBFC rules in India impose a static 2 crore holding requirement, which I would argue is much more lenient than the volume-based approach of the UK government. The UK's financial watchdogs also have a much more stringent corporate governance code than India's. Perhaps most people are not aware that NBFCs are still available for sale in India for black money (just as you would sell real estate), as a way for the people who eventually end up controlling the NBFC to escape scrutiny.

The objection to the “no assured returns” clause is a little puzzling to me: the RBI has rightfully mandated that, just as with stocks or mutual funds, nobody should claim a certain assured return. P2P platforms may stoop to promising assured returns because of investor pressure or the need to make quarterly numbers. It would not be surprising for a platform to run TV ads saying “give a loan on Platform-Z and get guaranteed interest rates of 100%”. This is precisely the reason ULIPs were regulated: their promised returns never accounted for “management fees”.

The two points above combine to show why the third point makes sense. The objection to the word “expansion”: the logic is exactly what the FCA in Britain put across, that you need to maintain a certain leverage ratio between cash on hand and cash lent out. This is a direct consequence of what happened in China, where uncontrolled expansion and leverage led to P2P platforms wiping out billions of dollars of capital. The way this worked was very simple: it was effectively Ponzi borrowing driven by “assured returns”. Ezubao promised unrealistically high “guaranteed returns” (seven times the bank rate). To fund the returns it owed its early investors, it promised even higher returns to new investors, whose money it used to pay back the older ones. This is why the RBI is rightfully mandating a leverage ratio: to prevent Ponzi mechanisms.

As far as clarifications on related-party transactions are concerned: these are not banned by the government. The government only says that related-party transactions must not be on terms that would not normally be offered. What everyone may not be aware of is that, by making the NBFC requirement mandatory, the RBI has already taken care of this, via notification RBI/2013-14/57. This notification makes it amply clear that the Net Owned Funds of an NBFC are reduced by amounts given out to

  • subsidiaries
  • companies in same group
  • other NBFCs

An online platform does not evolve transparency in a vacuum. Lending companies are risk centres, and the central bank has a duty to prevent one failure from becoming a chain-reaction meltdown. These regulations make a whole lot of sense for the overall health of the new-age lending ecosystem in India and will prevent catastrophic events like Ezubao.

Where I do support Harini is that the government of India needs to take the same approach as the Monetary Authority of Singapore: treat fintech companies as a sort of “SEZ” and let them grow unhindered until they reach a scale at which it becomes mandatory to regulate them.

And regulate them we should.

Suspending systemd/upowerd/logind on low battery (Fedora or Ubuntu)

On older Linux OS versions, the system used to automatically suspend on low battery. Then this was managed by gsettings using

gsettings set org.gnome.settings-daemon.plugins.power critical-battery-action 'suspend'

But there have been some breaking changes (for the good) that have taken all of this out of any obvious control of the user. If you open /etc/UPower/UPower.conf, you will see that the only relevant option is CriticalPowerAction=HybridSleep. So Hybrid-Sleep/Hibernate/Shutdown are the only options on low power. Testing out hybrid-sleep (using sudo /usr/lib/systemd/systemd-sleep hybrid-sleep) does not do anything… which is unsurprising.

So what you need to do is switch HybridSleep to actually suspend. The way to do it is

Create a /etc/systemd/sleep.conf containing the following (note the [Sleep] section header, which systemd expects):

[Sleep]
# Settings for the hybrid-sleep action
HybridSleepMode=suspend platform

# Settings for the suspend action ("freeze" intentionally omitted)
SuspendState=mem disk

Verify by running sudo /usr/lib/systemd/systemd-sleep hybrid-sleep.




The power of Digital India and the futility of our government

There have been a few reddit threads about how the #DigitalIndia campaign is a stupid exercise and how our funds and efforts would be better spent building basic infrastructure for farmers. It is a classic argument that gets repeated again and again. It is fairly similar to a dilemma in a hospital: should you do the surgery yourself, or spend a lot of extra time, effort, and money training a new intern to perform surgery? Note that it will take many years for the intern to qualify to perform surgery.

In 2002, when I graduated from IIT Bombay, I was part of a small project called aAquA headed by Prof Krithi Ramamritham. I will have to confess that I did no work, and was more interested in getting drunk – but aAquA turned into something very powerful. It still lives on today at https://aaqua.persistent.co.in/aaqua/forum/index and is run by Persistent Systems.

aAquA stood for Almost All Questions Answered. Think of it as Quora + Wikipedia for farmers. The interesting point is that this effort predates the mobile phone revolution in India, and even the internet revolution. It predates Flipkart and any e-commerce in India. Even more flabbergasting: aAquA was contemporary with television-based content for farmers (e.g. Krishi Darshan), yet it was used by farmers who walked all the way to a telephone booth to access the internet. What helped a lot was that it was multilingual from the start (e.g. here), which was very important for reaching that particular demographic.

Imagine what you could do with a mobile aAquA !

DigitalIndia is a fair goal, but unfortunately it is badly planned and doomed to failure. Let's take a simple example: just getting millions of farmers connected to the internet, presumably through their mobile devices, needs IPv6. In case people haven't noticed, we have run out of IPv4 addresses. Yes, there are ways to get around that, but they make zero sense in a country of our population. We needed IPv6 yesterday to enable mobile connectivity, yet we have no comprehensive strategic plan around it.

The only other country with a comparably pressing dependence on IPv6 is China, but it started a large-scale push to IPv6 in 2008 (using the Olympics as an excuse).

Also remember that we are the ones who need a large number of fonts and language support for our users to use the internet. That effort has not been standardized yet; in fact, the only place I can find comprehensive Indic fonts is the Google Noto project. However, font rendering for the lesser-known languages needs complex technologies like SIL Graphite, which is not an important criterion for the rest of the world. Even if we solve that, we have no content on the internet serving these languages. Take, for instance, the Traditional Knowledge Digital Library: the content inside is locked up in unreadable images and English-language PDFs. It cannot even be translated effectively.

And while these are important aspects to take into consideration, our government is keen on focusing on censorship and banning porn sites.

Digital India is a fair goal. I have zero confidence that this government can actually execute it.

Indian fonts and the Digital Divide

EDIT: I got a question from Kiran about what's so difficult about supporting Indian languages. After all, most computers have Indian languages, right?


There is NO operating system in existence that supports Indian languages in all their complexity. This is called the “ligature” problem, what we know in Hindi as “maatras”. Historically, Microsoft has been the most compatible of all operating systems for Indian languages. Take, for example, this page on Oriya language support on Microsoft's official site. The complexity of supporting Oriya is laid bare.

Look at the same language on Noto.

Even more interestingly, Noto screws up on Urdu: it assumes the Arabic Naskh script for Urdu. However, Urdu is an Indian language; it uses a derivative of the Persian Nastaliq script (a brilliant write-up by Ali Eteraz here).

This is why I filed a bug on Google 😉

So what happens on Android currently? Android reuses the same fonts as Linux, that is, the Lohit Indic fonts. However, Noto has undergone several improvements in rendering (as can be seen in this bug). Hindi/Devanagari-specific efforts in this direction have been SIL Annapurna and Microsoft's Utsaah/Nirmala, but no such love exists for most of the other Indian languages (especially the South Indian ones).

Now, SIL Graphite rendering technology is far superior to HarfBuzz/Pango (the default on Linux/Android) for minority and Indic languages. But for obvious reasons, it is not a priority for the core Android project. Additionally, how do you input text in Oriya on a mobile phone? You have to hunt through hundreds of keyboard apps to find one that works. This is NOT how you do DigitalIndia.

Apparently, our govt is spending a lot of money on a customized version of Linux called BOSS. And what is so special about this great OS ? this. As most people would know, this takes a system administrator all of 2 hours to configure.

You want to bridge the digital divide? Let's first understand the problem: language accessibility and infrastructure/IPv6. Let's talk about Facebook later.

Creating a Fedora 21 LiveUSB in Ubuntu 14.04

Sadly, Ubuntu's Startup Disk Creator does not allow you to create Fedora images.

The officially sanctioned way to create a Fedora LiveUSB in Ubuntu is the following:

sudo aptitude install isomd5sum python-parted python-pyisomd5sum python-urlgrabber extlinux python-qt4 python-qt4-dbus tar udisks libudisks2-dev

git clone https://github.com/lmacken/liveusb-creator.git

cd liveusb-creator

sudo ./liveusb-creator

The correct way to update/change git submodules

  • rm -rf .git/modules/interesting_modules
  • delete the lines containing [submodule "interesting_modules"] and its url = "http://something/" entry (these live in .git/config)
  • update .gitmodules
  • run "git submodule sync"
  • run "git submodule init"
  • run "git submodule update" – at this point, a new checkout should happen.
  • the new checkout will complain about a version mismatch. This is expected: the version of the submodule recorded in the super-project does not match. "git status" should also show the submodule directory as "modified".
  • cd interesting_modules
  • git reset --hard HEAD
  • enjoy!

The Android ART runtime is a Golang tunnel

I’m willing to bet that the first reason that Android switched to ART from Dalvik is the possibility of linking directly to object code from Golang.

The problem is not speeding up individual apps. The problem is that the core of Android is built in Java and is therefore accessible to other applications over the same language/VM. Now the issue is: how do you get Golang to link to this core?

ART compiles Java down to object code and is now able to link across language boundaries – theoretically, this means that I should be able to now use Android libs from Python as well.

Or, as is more likely, golang

Setting up btcd + Go build for bitcoin

My last post was about setting up the build system for Bitcoin reference system 0.9.0.

There is an alternative architecture for Bitcoin called btcd which is developed by Conformal Systems. This is claimed to be compatible with the main blockchain (including bugs).

There is a very interesting thread about how the btcd architecture (especially the split wallet/client and daemon architecture) has been adopted in the reference client at 0.9.0

I find Go very, very pleasant and productive to work with and understand, and its package manager is absolutely brilliant.

Setting up your machine to work with btcd is absolutely trivial. Remember that this should be the same on any platform (Windows, Linux or Mac) since Go is cross-platform in general; only the particular Go binary would differ.

Download and unpack Go from http://code.google.com/p/go/downloads/list . I used http://code.google.com/p/go/downloads/detail?name=go1.2.1.linux-amd64.tar.gz&can=2&q= because I'm on 64-bit Linux, but go ahead and use the one for your platform.

Assuming you unzip it to /home/sss/Code/go, set the following variable:

export GOROOT=/home/sss/Code/go

Test your Go installation by running /home/sss/Code/go/bin/go version . Ideally this environment variable should be in your zshrc, bashrc, etc. It never changes.

Now, create a directory called /home/sss/Code/mybtcd. This is your new workspace. When you are working on a particular workspace, set the following environment variable:

export GOPATH=/home/sss/Code/mybtcd

This tells the Go toolchain the location of your top-level workspace directory.

Now, to get btcd and all its dependencies as well as compile it in one shot, run:

/home/sss/Code/go/bin/go get github.com/conformal/btcd/...

After a few minutes, you should have the following directories (which comply with Go's recommended workspace directory structure):

./bin/ -> all your binaries

./pkg/ -> all third party library dependencies

./src/ -> all btcd as well as dependent third party source.

Running your bitcoin daemon is simply ./bin/btcd (help is at ./bin/btcd --help)

To hack your code, just write your code in ./src/github.com/conformal/btcd/ and run

~/Code/go/bin/go install -v -x  github.com/conformal/btcd/

All dependencies and binaries get rebuilt. Simple.

Compiling bitcoin 0.9.0 – the transaction malleability fix

I had a bit of trouble compiling Bitcoin 0.9.0 (which contains the all-important “transaction malleability” fix), so I'm posting this for the benefit of everyone. This was done on an Ubuntu 12.04 machine, which is relevant only for the system packages (if you're on any other machine, just ask around for the equivalent packages).

git clone https://github.com/bitcoin/bitcoin.git

cd bitcoin

git checkout v0.9.0

sudo apt-get install build-essential libboost-all-dev automake libtool autoconf #this part is dependent on Ubuntu

mkdir $PWD/release #I don't want to install bitcoin systemwide, so I make a local dir.

./autogen.sh && ./configure --with-incompatible-bdb --prefix=$PWD/release


make install


P.S. if anybody reading this is on another platform and figures out a slightly different way of doing things, please post in the comments

The intricacies of Bitcoin

What are some of the under-the-surface aspects of Bitcoin that safeguard its larger application as a viable digital currency?

transaction fees

Each transaction in Bitcoin is subject to transaction fees, which prevent something called dust spam. Where do these fees go? Transaction fees go to whoever processes the block that contains the transaction – an additional reward for miners. Very cool!

Whichever miner solves the next block gets to include a transaction for 50 BTC to themselves from the “coinbase”. It turns out that if any of the transactions the miner included in the block had a fee attached, the miner gets to collect those too. Therefore, when a miner solves a block, it typically gets something like 50.75 BTC instead of 50. The more transactions there were, the more fees received.

If you look at the BTC webpage's description of what happens when there are no more rewards for solving blocks, it mentions that they expect the network to be big enough by then that it will be worth solving blocks solely for the fees. If there are 10,000 transactions per block at a 0.005 BTC fee per transaction, that's 50 BTC in fees. If BTC really catches on, this is a realistic volume of transactions.

Transaction fees are also voluntary; a fee merely increases the chances that a miner will include your transaction in the block it mines. In practice, a miner just dumps the top few hundred KB of transactions into a block, sorted by transaction fee (descending, of course). When there aren't many pending transactions, perhaps because of a series of blocks in a short amount of time, a fee-less transaction will be confirmed anyway.
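A toy sketch of the packing policy just described (hypothetical transaction sizes and fees; the real client of this era also weighed coin age and priority, which I am ignoring here):

```python
# Toy miner policy: sort the mempool by fee (descending) and pack
# transactions into the block until the size limit is hit.
def pack_block(mempool, max_block_bytes=1_000_000):
    """mempool: list of (txid, size_bytes, fee_btc) tuples.
    Returns the chosen txids and the total fee collected, which
    is added to the block subsidy (e.g. 50 + 0.75 = 50.75 BTC)."""
    chosen, total_fee, used = [], 0.0, 0
    for txid, size, fee in sorted(mempool, key=lambda t: t[2], reverse=True):
        if used + size <= max_block_bytes:  # skip txs that would overflow
            chosen.append(txid)
            used += size
            total_fee += fee
    return chosen, total_fee
```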

Do Transactions get lost ?

When you send a transaction, it sends a packet to all connected peers. These peers store the transaction in their in-memory pools and tell all their connections that they have a new transaction. When those connections don’t have it yet, they ask for it, and that’s how a transaction spreads over the network.

When a user restarts their client, the memory pool is wiped and any unconfirmed/unmined transaction is deleted from that computer. But it is still available on other clients, so it's very unlikely for the transaction to be gone from the entire network.

Solving a block

A block is a list of transactions broadcast by the Bitcoin network. This system evolved from the question “how do I build a distributed transaction network without a central authority?” What will motivate people to contribute computational and network time to the Bitcoin system? Well, the chance to make money.

Bitcoin miners act as distributed banks – or more aptly, like those irritating credit card salesmen who tell you “please take this credit card”. Each of them is trying to be the eager salesman and be the first to “process” your transaction, and the way they do it is by solving a puzzle. “Mining” is essentially the process of competing to be the next to find the answer that “solves” the current block.

The mathematical problem in each block is difficult to solve, but once a valid solution is found, it is very easy for the rest of the network to confirm that the solution is correct.

Miners are essentially putting a notary stamp on a batch of transactions. That’s all they are needed for.

But how do you prevent a corrupt notary? Bitcoin does this by having tens of thousands of potential notaries, one of whom will happen to be the lucky one that gets to apply the stamp. The lucky one is whoever happens to solve the problem. All the potential notaries try to solve the puzzle over and over, and on average it takes about ten minutes for one to succeed.


Essentially, difficulty is a cost function that determines how hard hashing should be so that one block is found every 10 minutes, on average.


For every 2016 blocks found, the timestamps of the blocks are compared to find out how much time it took to find those 2016 blocks; call it T. We want 2016 blocks to take two weeks, so if T is different, we multiply the difficulty by (2 weeks / T). This way, if the hashrate continues as it was, it will now take two weeks to find the next 2016 blocks.

P.S. 2016 blocks in 14 days = 144 blocks per day. This is the expected block rate.

The difficulty can increase or decrease depending on whether it took less or more than 2 weeks to find 2016 blocks. Generally, the difficulty will decrease after the network hashrate drops.

If the correction factor is greater than 4 (or less than 1/4), then 4 or 1/4 is used instead, to prevent the change from being too abrupt.

NOTE: There is a bug in the implementation, due to which the calculation is based on the time to find the last 2015 blocks rather than 2016. Fixing it would require a hard fork and is thus deferred for now.
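The retargeting rule, including the clamp at 4 and 1/4, can be sketched as follows (a simplified illustration that ignores the off-by-one 2015-block bug noted above):

```python
TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

def retarget(old_difficulty, actual_timespan):
    """New difficulty after a 2016-block window that took
    actual_timespan seconds. Clamping actual_timespan to
    [T/4, 4T] caps the correction factor at 4x either way."""
    clamped = min(max(actual_timespan, TARGET_TIMESPAN // 4),
                  TARGET_TIMESPAN * 4)
    # Blocks came faster than two weeks -> difficulty goes up.
    return old_difficulty * TARGET_TIMESPAN / clamped
```

For example, if 2016 blocks arrive in one week instead of two, difficulty doubles; no matter how fast they arrive, it can at most quadruple per retarget.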

Difficulty should settle around the 70-billion mark, assuming 300 USD/BTC, 0.08 USD/kWh, 1J/GH (with Gen2 ASICs dominating the field).

A bitcoin miner's profit relates to the amount of hashing power they contribute to the network. If their mining power stays constant, their share of the total hashing power decreases when the network's hashing power increases, i.e.

newProfit = currentProfit * currentDiff/newDiff.

At a currentProfit of 1BTC/d and a 30% increase in difficulty, they get:

(1BTC/d)*100/(100+30)= (1BTC/d)/1.3 = 0.76923077 BTC/d

i.e. their profit decreases by ~23%.

Network Hashrate – the mathematics of difficulty.

To find a block, the hash must be less than the target. The hash is effectively a random number between 0 and 2**256-1.

hash_rate = (blocks_found / expected_blocks) * difficulty * 2**32 / 600
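In code, the same estimate looks like this (difficulty 1 corresponds to roughly 2**32 hashes per block on average, and 600 seconds is the target block interval; defaulting expected_blocks to one day's worth is my own choice for illustration):

```python
def network_hashrate(difficulty, blocks_found, expected_blocks=144):
    """Estimated network hashrate in hashes/second.

    Each block at the given difficulty takes difficulty * 2**32
    hashes on average; scaling by blocks_found/expected_blocks
    corrects for luck over the observation window."""
    return (blocks_found / expected_blocks) * difficulty * 2**32 / 600
```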

Block maturation

Generated coins can’t be spent until the generation transaction has 101 confirmations. Transactions that try to spend generated coins before this will be rejected

The purpose is to prevent a form of transaction reversal (most commonly associated with “double spends”) if the block is orphaned. If a block is orphaned, its coinbase reward (block subsidy + tx fees) “ceases to exist”. The coins are produced by the block, and when a block is orphaned, it is the replacement block's version of the coinbase tx that is considered valid by the network.

So to avoid that undesirable situation, the network requires a coinbase tx (the reward to miners) to “mature”, i.e. wait 100 confirmations (the client enforces 120 confirmations, but only 100 are required by the protocol). If a block is orphaned before it gets 100 blocks deep into the chain, only the miner is affected.

The reference code that checks this is:

// If prev is coinbase, check that it's matured
if (txPrev.IsCoinBase())
    for (CBlockIndex* pindex = pindexBlock; pindex && pindexBlock->nHeight - pindex->nHeight < COINBASE_MATURITY; pindex = pindex->pprev)
        if (pindex->nBlockPos == txindex.pos.nBlockPos && pindex->nFile == txindex.pos.nFile)
            return error("ConnectInputs() : tried to spend coinbase at depth %d", pindexBlock->nHeight - pindex->nHeight);

Block Size Limit

Currently, the block subsidy reduces the motivation of miners to include transactions, because 99% of their income comes from the subsidy. Including zero transactions wouldn’t impact them greatly.

But when the block reward shrinks, miners may get 99% of their income from transactions. This means they will be motivated to pack as many transactions as possible into their blocks. They will receive the fees, and the rest of the bitcoin network will be burdened with the massive block they created. Without a hard limit on block size, miners will have incentive to include each and every fee-carrying transaction that will fit.

So a single entity benefits, and everyone else shoulders the cost with very little benefit.


Blockchain fork and double spending

If you understood what a block is, you can ask: what happens if two independent miners find independent answers to the puzzle of “solving a block”?

Good question, and this is called a blockchain fork, because now there are two candidates for what the “current block” is. The way this is resolved is that the client accepts the “longest” chain of blocks as valid. The “length” of the block chain refers to the chain with the most combined difficulty, not the one with the most blocks. This prevents someone from forking the chain, creating a large number of low-difficulty blocks, and having it accepted by the network as “longest”.

Now, remember that the miners decide which chain is valid by continuing to add blocks to it. The longest block chain is viewed as the valid block chain, because the majority of the network computation is assumed not to come from malicious users.
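That selection rule is simple to sketch (chains represented as lists of per-block difficulty values; the numbers below are hypothetical):

```python
def best_chain(chains):
    """Pick the chain with the most combined difficulty (total
    proof of work), NOT the one with the most blocks."""
    return max(chains, key=sum)
```

This is why a long chain of cheap, low-difficulty blocks loses to a shorter chain of hard-won ones.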

– Race conditions

If your wallet accepts incoming peer connections, peers can potentially control the information you receive (by flooding your connections). This means it is possible to convince you of transactions that the larger network is rejecting. So, if you are a Bitcoin-accepting merchant, disable your incoming connections and connect only to reputed nodes to confirm transactions.

It is worth noting that a successful attack costs the attacker one block – they need to ‘sacrifice’ a block by not broadcasting it, and instead relaying it only to the attacked node.

There is a variant of this called the Finney attack which needs the collusion of a large miner – unlikely, but still possible.

– Brute force and >50% attacks

The attacker submits to the merchant/network a transaction which pays the merchant, while privately mining a blockchain fork in which a double-spending transaction is included instead. After waiting for n confirmations, the merchant sends the product. If the attacker happened to find more than n blocks at this point, he releases his fork and regains his coins; otherwise, he can try to continue extending his fork with the hope of being able to catch up with the network. If he never manages to do this, the attack fails and the payment to the merchant will go through.

NOTE: Theoretically, someone with massive computational power could start controlling this, but that is highly unlikely.