This post will serve as an introduction to the concept of OT extension. We’ll discuss why it is useful and present one of its most fundamental constructions, typically referred to as “IKNP”.
We’ll start with a quick reminder of what an Oblivious Transfer is.
Oblivious Transfers Reminder
Oblivious Transfer is a functionality that takes place between two parties:
 A “Sender Party”, denoted $P_S$, holding two messages $m_0,m_1$.
 A “Receiver Party”, denoted $P_R$, holding a choice bit $b$.
The protocol realizing Oblivious Transfer should satisfy three constraints:
 Correctness: $P_R$ learns $m_b$, that is either $m_0$ or $m_1$ depending on its choice bit $b$.
 Receiver Privacy: $P_S$ should learn nothing about $b$, the choice bit of $P_R$.
 Sender Privacy: $P_R$ should learn nothing about $m_{1-b}$, the message it didn’t choose to learn.
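As a mental model, here is a tiny Python sketch of mine (not from any paper) of the ideal functionality, written as a trusted third party that both parties hand their inputs to:

```python
def ideal_ot(m0, m1, b):
    """Ideal Oblivious Transfer, modeled as a trusted third party.

    The sender contributes (m0, m1); the receiver contributes b.
    Correctness: the receiver gets exactly m_b.
    Sender privacy: nothing about m_{1-b} leaves the box.
    Receiver privacy: the sender's output reveals nothing about b.
    """
    sender_output = None           # the sender learns nothing
    receiver_output = (m0, m1)[b]  # the receiver learns only m_b
    return sender_output, receiver_output

print(ideal_ot("secret-0", "secret-1", 1))  # (None, 'secret-1')
```

Any real protocol for OT has to emulate this box without a trusted third party, which is exactly what makes it hard.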
Why OT Extension
Oblivious Transfers (OTs) are useful for a wide range of applications (e.g., for Garbled Circuits). I have also dedicated a post to showing how OTs can be constructed and why constructing them is not a trivial task. Moreover, all known constructions of OTs are based on primitives beyond symmetric cryptography, among them RSA groups and elliptic-curve groups. Such primitives are typically computationally expensive; therefore, applications that require large quantities of OTs, even if these OTs are done in parallel, may suffer from poor performance. At this point you are probably eager to know what an OT extension is, so just before defining OT extensions, I’d like to briefly introduce the notion of Hybrid Cryptosystems (to which I should probably dedicate a post someday).
Warning: If you don’t know about Public-Key Cryptography, you are encouraged to skip the following paragraph; it is given only as a supplementary analogue of OT extension from the realm of Encryption Schemes.
OTs remind me of public-key encryption (PKE) in the following sense: public-key encryption is a computationally heavy tool, just like known implementations of OTs. To achieve secure communication, a naive cryptographer might be tempted to use this tool to encrypt each and every message sent between two communicating parties. However, encrypting 100% of the communication using PKE may lead to poor performance and increased costs for companies with millions of clients connected concurrently. From an engineering perspective we would prefer using secret-key encryption to secure the communication. However, secret-key encryption, as its name suggests, requires the communicating parties to agree on a secret key, unlike public-key encryption. We would be interested, therefore, in a solution that is as efficient as secret-key encryption, but preserves the convenience of public-key cryptosystems. The solution is to use the public-key encryption tool only to realize a secure Key-Agreement (KA) protocol, so that after this KA protocol ends, the communicating parties share a secret key that will be used to encrypt the rest of the communication efficiently.
Similarly, protocols for OTs are computationally heavy. Consider the scenario where a naive cryptographer wants to implement some protocol (e.g., Yao’s Garbled Circuits protocol) that requires a large number of OTs (say 100,000 OTs) between two parties. That is, one party, $P_A$, holds 100,000 pairs of messages $(m_0^1,m_1^1),\ldots,(m_0^{100,000},m_1^{100,000})$ while another party, $P_B$, holds 100,000 choice bits $b_1,\ldots,b_{100,000}$. We want to end up with $P_B$ receiving $m_{b_i}^i$ for $i=1,\ldots,100,000$ without $P_A$ learning anything about $P_B$’s choice bits and without $P_B$ learning anything about the messages she didn’t choose. The naive cryptographer, therefore, might be tempted to run the OT protocol 100,000 times. Since the OT protocols we know as of today are computationally heavy, doing so might result in poor performance. We are therefore interested in realizing these 100,000 OTs without actually running the OT protocol 100,000 times. Achieving this goal is done by employing OT Extension. In other words, OT Extension is a method to realize a large number of OTs from a small number of invocations of the OT protocol. It is important to clarify at this early point that our goal is to minimize the computation, rather than the communication. That is, the main bottleneck for realizing large numbers of OTs is computational rather than communicational.
Spoiler: After we resolve the computational bottleneck, communication becomes the major bottleneck. While the problem of reducing the communication cost of OT Extension has been extensively studied, we will not focus on these follow-up works today.
Problem Definition
Let’s try to define the problem a bit more concretely. First, we distinguish between OTs and secret-key primitives, also known as “Minicrypt” (e.g., Pseudorandom Generators (PRGs), One-Way Functions (OWFs) and Secret-Key Encryption). You may ask “but why?!”, and this is a rightful question. Why do we distinguish between OTs and secret-key primitives? Well, there are two reasons for doing so, a practical one and a theoretical one. The practical reason is that we simply don’t know how to build protocols for OT that rely solely on secret-key primitives! Indeed, if we someday do, this distinction will become meaningless. The theoretical reason was given by Impagliazzo and Rudich, who showed in their 1989 paper “Limits on the provable consequences of one-way permutations” that a black-box reduction from OT to OWFs implies $P\neq NP$. Without getting into too much detail, it means that constructing OTs from secret-key primitives would resolve the most fundamental question in theoretical computer science. While it isn’t impossible, it seems quite challenging given our existing knowledge in complexity theory, and therefore, for the time being, our distinction makes sense.
Moving forward with our problem definition, we would like to realize the following functionality, which we call $N$-OT, where, informally speaking, two parties $P_S$ (the “sender”) and $P_R$ (the “receiver”) wish to realize $N$ OTs. That is, $P_S$ holds $N$ pairs of messages $(m_0^1,m_1^1),\ldots,(m_0^N,m_1^N)$ and $P_R$ holds $N$ choice bits $b_1,\ldots,b_N$. Our protocol for $N$-OT has to satisfy the following criteria (which we mentioned at the beginning of the post for $N=1$):
 Correctness: $P_R$ learns the message it chose to learn from each pair: $m_{b_1}^1,\ldots,m_{b_N}^N$.
 Sender Privacy: $P_R$ learns nothing about the messages it didn’t choose to learn from each pair: $m_{1-b_1}^1,\ldots,m_{1-b_N}^N$.
 Receiver Privacy: $P_S$ learns nothing about the choice bits $b_1,\ldots,b_N$.
As we already mentioned, one way to realize this functionality is simply via $N$ invocations of the OT protocol for a single pair of messages, but in order to reduce the computational complexity of our protocol, we add another criterion:
 Computational Efficiency: Only symmetric-key primitives (PRGs, OWFs, symmetric encryption) and at most $n \ll N$ invocations of the OT protocol are allowed. We will later see how exactly $n$ and $N$ are tied.
So just before seeing how this problem can be solved, I’d like to discuss the historical chain of events that eventually led to the construction I’ll present later in the post.
Historical Note
As we mentioned earlier, in 1989 Impagliazzo and Rudich proved that building OTs from symmetric-key primitives in a black-box way would resolve the most fundamental question in theoretical computer science. Following that work, Donald Beaver showed in 1991 that a small number of OTs can be extended into a large number of OTs, which is what we are looking for today. Right? Well, yes and no. Beaver’s construction works in what cryptographers call a “non-black-box” way. That is, Beaver’s construction does use a secret-key primitive (namely, a PRG) to extend a small number of OTs into a large number of OTs, but it relies on the exact way the PRG is computed. To be exact, his construction doesn’t simply use the PRG as it is supposed to be used, but assumes the parties running it have access to the boolean circuit computing the PRG itself. Cryptographers usually try to refrain from non-black-box constructions for various somewhat technical reasons. In our case, since the parties running Beaver’s extension algorithm have to be aware of the boolean circuit computing the PRG, they cannot delegate the PRG computation to another party (what cryptographers call a “PRG Oracle”); they must compute it themselves, since the extension algorithm meddles with the actual computation the PRG circuit performs. Refraining from non-black-box constructions also has practical reasons, relating to the efficiency issues these constructions typically incur.
So, since Beaver has outsmarted our criterion, let’s change it a little bit:
 Computational Efficiency: Only symmetric-key primitives (PRGs, OWFs, symmetric encryption) used **in a black-box way** and at most $n \ll N$ invocations of the OT protocol are allowed. We will later see how exactly $n$ and $N$ are tied.
The difference from the previous phrasing of the criterion is in bold: we seek to use symmetric-key primitives in a black-box way.
Next, in 2003, Ishai, Kilian, Nissim and Petrank published their paper “Extending Oblivious Transfers Efficiently” (hereafter referred to as IKNP), where they presented an OT extension relying on secret-key primitives (specifically, a PRG) in a black-box way. Having presented the problem and some of the relevant history that led to its solution, in the rest of the post we shall focus on the IKNP construction itself. But to fully understand it, there is a minor subtlety we have to discuss first regarding OTs.
Types of OTs
To make the problem of OT extension a bit “cleaner” we will present two different types of OTs:
 Chosen Message OT: This is the “classical” OT, where the input of the sender is two messages $m_0,m_1$ and the input of the receiver is a choice bit $b$. From the functionality, the sender learns nothing, while the receiver learns $m_b$ (and in particular, nothing about $m_{1-b}$). In short, this is the OT you should already know by now.
 Random OT (abbr. ROT): This is a slightly different OT where the functionality itself randomly samples an OT correlation for the parties. That is, the parties don’t have any input whatsoever, and at the end of the protocol the sender learns two random messages $m_0,m_1$ while the receiver obtains a random bit $b$ and a message $m’=m_b$. In other words, the outputs are random subject to the constraint that $m’=m_b$.
This may sound a bit counterintuitive, but actually the first functionality (Chosen Message OT) is easier to realize than Random OT. Why so? Let’s try to prove it! If we show that from Random OT we can construct Chosen-Message OT, then any protocol for Random OT implies a protocol for Chosen-Message OT, and therefore the latter is an easier problem. Well, given a protocol that samples random messages for the OT, we can transform it into a chosen-message OT as follows (relying only on symmetric primitives).
You are encouraged to stop reading and think about it for a few minutes.
The solution goes (informally) like this: say the sender party received random messages $r_0,r_1$ from the Random OT functionality while the receiver received $b, r_{b}$. To realize a chosen-message OT where the sender party inputs $m_0,m_1$ and the receiver has a choice bit $b’$, we do the following:

 The receiver sends to the sender $d = b\oplus b’$; this is a random bit since $b$ is random and unknown to the sender. In particular, the sender learns nothing about either $b$ or $b’$ from this bit.
 The sender considers $r_0,r_1$ as symmetric encryption keys (as they are random and may be chosen to be sufficiently large, they can be used as such keys). It then sends to the receiver $E_{r_0}(m_{0\oplus d}),E_{r_1}(m_{1\oplus d})$.
 The receiver, who knows $r_b$, opens $E_{r_b}(m_{b\oplus d})$ and learns $m_{b\oplus d}=m_{b\oplus b\oplus b’} =m_{b’}$ as requested.
In other words, the idea is to use the random messages from the Random OT functionality as symmetric encryption keys and send $b\oplus b’$ to the sender so it can “change the ordering” of its messages if $b\neq b’$.
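As a sanity check, here is a small Python sketch of this reduction (a toy model of mine, not production code): the Random-OT correlation is produced by a trusted dealer, and the random messages serve as one-time-pad keys, which is a valid symmetric encryption here since keys and messages have the same length.

```python
import secrets

LAM = 16  # toy security parameter, in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def random_ot():
    """Trusted dealer sampling a Random-OT correlation (for illustration only)."""
    r0, r1 = secrets.token_bytes(LAM), secrets.token_bytes(LAM)
    b = secrets.randbelow(2)
    return (r0, r1), (b, (r0, r1)[b])

def chosen_message_ot(m0: bytes, m1: bytes, b_prime: int) -> bytes:
    (r0, r1), (b, r_b) = random_ot()
    d = b ^ b_prime                      # receiver -> sender; reveals nothing about b'
    msgs = (m0, m1)
    # sender "reorders" its messages by d and one-time-pad encrypts them
    e0, e1 = xor(r0, msgs[0 ^ d]), xor(r1, msgs[1 ^ d])  # sender -> receiver
    # receiver opens the one ciphertext it can decrypt: m_{b XOR d} = m_{b'}
    return xor(r_b, (e0, e1)[b])

m0, m1 = b"0" * LAM, b"1" * LAM
assert chosen_message_ot(m0, m1, 0) == m0
assert chosen_message_ot(m0, m1, 1) == m1
```

The receiver only ever holds the key $r_b$, so the other ciphertext stays opaque to it, matching the sender-privacy requirement.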
In any case, since chosen-message OT can be built easily from Random OT, the IKNP construction focuses on extending a small number of OTs into a large number of Random OTs.
IKNP Construction
In their paper, IKNP focused on taking a small number ($\lambda$) of OTs and extending them into a large number ($N$) of Random OTs. Let’s see how it’s done. We denote by $\lambda$ the security parameter, that is, a parameter such that the higher it is set, the more secure the protocol gets.
The Input
 The Sender has input $((m_0^1,m_1^1),\ldots,(m_0^\lambda,m_1^\lambda))$, that is, $\lambda$ pairs of messages. All messages are of the same length $\lambda$.
 The Receiver has input $((b_1,m_{b_1}^1),\ldots,(b_\lambda,m_{b_\lambda}^\lambda))$, that is, $\lambda$ choice bits together with the chosen message from each of the sender’s pairs.
Extending The Input
Since we can use secret-key primitives, we will use a pseudorandom generator ${\sf PRG}:\{0,1\}^\lambda \rightarrow \{0,1\}^N$, and both the sender and the receiver will apply it to all of their messages:
 The sender computes $M_j^i={\sf PRG}(m_j^i)$ for $j\in \{0,1\}$ and $i\in\{1,\ldots,\lambda\}$.
 The receiver computes $M_{b_i}^i={\sf PRG}(m_{b_i}^i)$ for $i\in\{1,\ldots,\lambda\}$.
Renaming Variables
At this point we can think of the $M_j^i$ as vectors in $\mathbb{F}_2^N$, that is, vectors of length $N$ where each element is either $0$ or $1$; we can also think of the $b_i$’s as elements in $\mathbb{F}_2$. We use “$+$” and “$-$” to denote addition/subtraction in $\mathbb{F}_2$ as well as pointwise addition/subtraction of vectors in $\mathbb{F}_2^N$. The distinction between addition and subtraction is cosmetic, as in binary fields they have the exact same outcome. With this algebraic setting it holds that for any $i\in \{1,\ldots,\lambda\}$:
\[M^i_{b_i} = M_0^i\cdot(1-b_i)+M_1^i\cdot b_i\]Indeed, convince yourself the above equation is correct both when $b_i=0$ and when $b_i=1$. By moving some terms from the right-hand side of the equation to the left, we get:
\[M^i_{b_i} - M_0^i = (M_1^i-M_0^i)\cdot b_i\]To simplify, for each $i\in \{1,\ldots,\lambda\}$:
 The sender, $P_S$, sets $\vec{u}_i=M_1^i-M_0^i$ and $\vec{v}_i=M_0^i$.
 The receiver, $P_R$, sets $\vec{w}_i = M^i_{b_i}$.
Therefore, we get:
\[\vec{w}_i - \vec{v}_i = \vec{u}_i \cdot b_i\]In fact, it will be easier in the rest of the post to use matrix notation rather than vector notation. Therefore, we think of three matrices $W,V,U\in \mathbb{F}_2^{N \times \lambda}$ such that the $i$th column of $W,V,U$ is $\vec{w}_i,\vec{v}_i,\vec{u}_i$ respectively. In addition, the receiver defines the vector $\vec{b}=(b_1,\ldots,b_\lambda)$. Also, we introduce a little bit of notation:
 For any matrix $A$ we denote by $A_{\cdot,i}$ its $i$th column and by $A_{j,\cdot}$ its $j$th row.
 For any vector $\vec{v}=(v_1,\ldots,v_n)$ we denote by $\text{diag}(\vec{v})$ the $n \times n$ diagonal matrix with the elements of $\vec{v}$ on its diagonal. That is:
\[\text{diag}(\vec{v}) = \begin{pmatrix} v_1 & 0 & \dots & 0\\ 0 & v_2 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & v_n \end{pmatrix}\]
With our new notation it holds that $W-V=U\times\text{diag}(\vec{b})$, where “$\times$” denotes matrix multiplication. Be sure you understand why the equality really holds; it might help to write the matrices explicitly. The above equality means that:
\[\begin{pmatrix} \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \vec{w}_1 & \vec{w}_2 & \dots & \vec{w}_\lambda \\ \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \end{pmatrix} - \begin{pmatrix} \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_\lambda \\ \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \end{pmatrix} = \begin{pmatrix} \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \vec{u}_1 & \vec{u}_2 & \dots & \vec{u}_\lambda \\ \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \end{pmatrix} \times \begin{pmatrix} b_1 & 0 & \dots & 0\\ 0 & b_2 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & b_\lambda \end{pmatrix}\]with $U,V$ known to the sender and $W,\vec{b}$ known to the receiver.
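A quick numerical sanity check of this identity, in a toy numpy model of mine where uniformly random columns stand in for the PRG outputs (the pseudorandomness itself is not modeled):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, N = 8, 32                          # toy sizes: lambda base OTs, N target OTs

M0 = rng.integers(0, 2, size=(N, lam))  # column i plays the role of PRG(m_0^i)
M1 = rng.integers(0, 2, size=(N, lam))  # column i plays the role of PRG(m_1^i)
b  = rng.integers(0, 2, size=lam)       # receiver's choice bits

U = (M1 - M0) % 2                       # u_i = M_1^i - M_0^i, column-wise
V = M0                                  # v_i = M_0^i
W = np.where(b == 1, M1, M0)            # w_i = M_{b_i}^i, selected column by column

# W - V = U x diag(b), everything over F_2
assert np.array_equal((W - V) % 2, (U @ np.diag(b)) % 2)
print("identity holds")
```

Note that column $i$ of $U\times\text{diag}(\vec b)$ is just $\vec u_i\cdot b_i$, so this is exactly the per-column identity stacked into matrix form.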
Derandomization
Notice that by now the matrix $U$ is pseudorandom. Our goal at this point is to derandomize the vectors $\vec{u}_i$ so that the columns of $U$ will all be equal to the same vector $\vec{u}^*$ known to the sender. In other words, we want the sender to send some “correction” information to the receiver, who in turn will update its vectors $\vec{w}_i$ into corrected vectors $\vec{w}_i’$ so that for all $i\in\{1,\ldots,\lambda\}$ we will have $\vec{w}_i’-\vec{v}_i=\vec{u}^*\cdot b_i$. It is crucial that the receiver cannot learn anything about the sender’s $\vec{v}_i,\vec{u}_i$ or $\vec{u}^*$ from the correction information it receives.
Without further ado, let’s see how the derandomization process goes. For each $i\in\{1,\ldots,\lambda\}$ the derandomization of $\vec{u}_i$ is done by:
 The sender computes $\bar{u}_i=\vec{u}^* - \vec{u}_i$ and sends $\bar{u}_i$ to the receiver.
 The receiver updates $\vec{w}_i’ \leftarrow \vec{w}_i + \bar{u}_i\cdot b_i$.
Notice that now the following equality holds:
\[\begin{aligned} \vec{w}_i' - \vec{v}_i &= \vec{w}_i - \vec{v}_i + \bar{u}_i\cdot b_i \\ &= \vec{u}_i\cdot b_i + \bar{u}_i \cdot b_i \\ &= \vec{u}_i\cdot b_i + \vec{u}^*\cdot b_i - \vec{u}_i\cdot b_i \\ &= \vec{u}^* \cdot b_i \end{aligned}\]as requested.
For each $i$ the sender sends $\bar{u}_i$, a vector of length $N$, so overall the communication in this step is $\lambda N$ bits.
After this step the receiver updates its matrix $W$ into a matrix $W’$ whose $i$th column is $\vec{w}_i’$. The sender’s matrix $U$ now has all its columns equal to $\vec{u}^*$; in fact, the sender can forget about the matrix $U$ and just store the vector $\vec{u}^*$. So now it holds that $W’ - V = \vec{u}^* \times \vec{b}^T$, where $\vec{b}^T$ is just $\vec{b}$ written as a row vector. The following can help to visualize the foregoing matrix equality:
\[\begin{pmatrix} \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \vec{w}_1' & \vec{w}_2' & \dots & \vec{w}_\lambda' \\ \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \end{pmatrix} - \begin{pmatrix} \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_\lambda \\ \bigg\vert & \bigg\vert & \dots & \bigg\vert \\ \end{pmatrix} = \begin{pmatrix} \vec{u}^*_1 \\ \vec{u}^*_2 \\ \vdots \\ \vec{u}^*_N \\ \end{pmatrix} \times \begin{pmatrix} b_1 & b_2 & \dots & b_\lambda \end{pmatrix}\]Also notice that $\vec{u}^* \times \vec{b}^T$ is a “Tensor Product” (outer product), which simply means that the result is a matrix $M$ such that $M_{i,j}=\vec{u}^*_i \cdot b_j$.
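Here is a small numpy sketch of the derandomization step (a toy model of mine, with random columns standing in for the PRG outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, N = 8, 32

# setup: random stand-ins for the PRG outputs and the choice bits
M0 = rng.integers(0, 2, size=(N, lam))
M1 = rng.integers(0, 2, size=(N, lam))
b  = rng.integers(0, 2, size=lam)
U, V = (M1 - M0) % 2, M0
W = np.where(b == 1, M1, M0)

# the sender picks a random target column u* and sends corrections
u_star = rng.integers(0, 2, size=N)
U_bar  = (u_star[:, None] - U) % 2       # bar{u}_i = u* - u_i, column-wise
# the receiver applies each correction scaled by its choice bit b_i
W_prime = (W + U_bar * b) % 2

# after derandomization: W' - V is the outer product u* x b^T
assert np.array_equal((W_prime - V) % 2, np.outer(u_star, b) % 2)
print("derandomized")
```

The broadcast `U_bar * b` multiplies column $i$ of the correction matrix by $b_i$, which is exactly the per-column update rule above.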
Switching Roles
This is the trickiest part, or at least it was the hardest for me to grasp when I first read about IKNP. Though it isn’t very technical, there is something beautiful about it once you get it.
The last equation from the previous section states that $W’ - V = \vec{u}^* \times \vec{b}^T$. Therefore, for each row $j$ of the matrices $W’,V$ it holds that:
\[W'_{j,\cdot} - V_{j,\cdot} = \vec{u}^*_j \cdot \vec{b}^T\]In other words, the row vector resulting from subtracting the $j$th row of $V$ from the $j$th row of $W’$ equals the $j$th element of the vector $\vec{u}^*$ (that is, $\vec{u}^*_j$) multiplied by the row vector $\vec{b}^T$.
Notice that $W’_{j,\cdot}$ and $V_{j,\cdot}$, as well as $\vec{b}$, are vectors in $\mathbb{F}_2^\lambda$, and $\vec{u}^*_j$ is just a single bit. In this stage, for each $j\in \{1,\ldots,N\}$:
 The receiver sets $\vec{r}_0^j \leftarrow W’_{j,\cdot}$ and $\vec{r}_1^j \leftarrow W’_{j,\cdot} + \vec{b}^T$, two messages of length $\lambda$.
 The sender sets a bit $c_j = \vec{u}^*_j$.
Now if you look closely, the sender, who holds $V_{j,\cdot}$, actually holds $\vec{r}_{c_j}^j$. This looks very much like the original OT correlation, right?
 The “receiver party” holds two messages $\vec{r}_0^j,\vec{r}_1^j$.
 The “sender party” holds a choice bit $c_j$ as well as $\vec{r}_{c_j}^j$.
But we got the roles flipped! Our receiver party is the one that holds two messages, and the sender party holds a choice bit and the message corresponding to its choice. So we managed to get an OT correlation where the receiver of the original OTs acts as a sender and the sender of the original OTs became a receiver.
This feature of IKNP is important to mention: the OT extension works by extending a small number of OTs into a large number of OTs while flipping the roles between the sender and the receiver.
One last property of the resulting OTs, which takes us to the final step of IKNP: we were seeking Random OTs, but in the resulting OT correlation (of $\vec{r}_0^j,\vec{r}_1^j$ and choice bit $c_j$) the messages $\vec{r}_0^j,\vec{r}_1^j$ are not so random. Notice that for all $j\in\{1,\ldots,N\}$ the difference of the messages satisfies:
\[\vec{r}_1^j - \vec{r}_0^j=\vec{b}\]Notice that a truly random OT will not have this property, since for random messages, their difference is also random. In more professional jargon, the messages $\vec{r}_0^j$ and $\vec{r}_1^j$ are correlated, with $\vec{b}$ being the correlation. Therefore, the last step in the IKNP construction is breaking the correlation.
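Both the role switch and the leftover correlation can be checked numerically. The following is a self-contained toy numpy model of mine (random columns stand in for the PRG outputs):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, N = 8, 32

# setup and derandomization, as described in the previous sections
M0 = rng.integers(0, 2, size=(N, lam))
M1 = rng.integers(0, 2, size=(N, lam))
b  = rng.integers(0, 2, size=lam)
V  = M0
W  = np.where(b == 1, M1, M0)
u_star  = rng.integers(0, 2, size=N)
W_prime = (W + ((u_star[:, None] - (M1 - M0)) % 2) * b) % 2

# switching roles: rows of W' become the new OT messages
r0 = W_prime                 # r_0^j = W'_{j,.}
r1 = (W_prime + b) % 2       # r_1^j = W'_{j,.} + b^T
c  = u_star                  # the (new) receiver's choice bits c_j = u*_j

# the original sender's row V_{j,.} is exactly r_{c_j}^j ...
chosen = np.where(c[:, None] == 1, r1, r0)
assert np.array_equal(chosen, V % 2)
# ... but the pairs are correlated: r_1^j - r_0^j = b for every j
assert np.array_equal((r1 - r0) % 2, np.tile(b, (N, 1)))
print("flipped OT correlation verified")
```

The second assertion makes the problem visible: all $N$ message pairs share the same offset $\vec b$, which is what the next step has to destroy.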
Breaking The Correlation
This step is quite technical. At the beginning of it:
 The party $P_R$ is the base-OT receiver but the OT-extension sender: it holds $N$ pairs of messages $((r_0^1,r_1^1),\ldots,(r_0^N,r_1^N))$.
 The party $P_S$ is the base-OT sender but the OT-extension receiver: it holds $N$ choice bits $c_1,\ldots,c_N$ as well as the messages $r_{c_1}^1,\ldots,r_{c_N}^N$ corresponding to its choice bits.
 From previous steps, $P_R$ also holds a vector $\vec{b}$ such that $r_1^i-r_0^i=\vec{b}$ for all $i\in\{1,\ldots,N\}$; therefore the current OT correlations are not random, since truly random pairs of OT messages would most likely not satisfy this relation.
To break the correlation, IKNP suggest one of the following options:
If we assume the “Random Oracle Model” where both parties have access to a random function $\mathcal{O}:[N]\times\{0,1\}^\lambda \to \{0,1\}^\lambda$ then:
 The sender computes $h_j^i = \mathcal{O}(i,r_j^i)$ for $j\in\{0,1\}$ and $i\in\{1,\ldots,N\}$.
 The receiver computes $h_{c_i}^i = \mathcal{O}(i,r_{c_i}^i)$ for $i\in\{1,\ldots,N\}$.
The resulting correlation, where $P_R$ holds $(h_0^1,h_1^1),\ldots,(h_0^N,h_1^N)$ and $P_S$ holds $(c_1,h_{c_1}^1),\ldots,(c_N,h_{c_N}^N)$, is a truly random set of $N$ OT correlations; that is because a random oracle, being a truly random function, is very unlikely to preserve the correlations.
If we prefer not to assume the “Random Oracle Model”, then IKNP also propose the usage of “Correlation Robust Hash Functions”, a cryptographic construct which can be thought of as a regular hash function $H$ with the property (satisfied by almost all real-life constructions of hash functions) that for a random “offset” $R$, the function $x \mapsto H(x+R)$ is pseudorandom. In their paper they show that Correlation Robust Hash Functions are sufficient to break the correlation.
The Protocol
If we combine all previous steps we get the following protocol.
 The sender $P_S$ and receiver $P_R$ perform $\lambda$ “Base-OTs”.
 So the sender holds $\lambda$ pairs of $\lambda$-bit-long messages $(m_0^1,m_1^1),\ldots,(m_0^\lambda,m_1^\lambda)$.
 The receiver holds $\lambda$ choice bits $b_1,\ldots,b_\lambda$ and the messages corresponding to its choice bits $m_{b_1}^1,\ldots,m_{b_\lambda}^\lambda$.
 Extending the input: Each party applies $\sf PRG$ to all its messages.
 The sender computes $M_j^i={\sf PRG}(m_j^i)$ for $j\in \{0,1\}$ and $i\in\{1,\ldots,\lambda\}$.
 The receiver computes $M_{b_i}^i={\sf PRG}(m_{b_i}^i)$ for $i\in\{1,\ldots,\lambda\}$.
 Renaming Variables:
 The sender, $P_S$, sets $\vec{u}_i=M_1^i-M_0^i$ and $\vec{v}_i=M_0^i$, as well as matrices $U,V$ whose $i$th columns are $\vec{u}_i$ and $\vec{v}_i$ respectively.
 The receiver, $P_R$, sets $\vec{w}_i = M^i_{b_i}$, as well as a matrix $W$ whose $i$th column is $\vec{w}_i$.
 Derandomization:
 The sender computes $\bar{u}_i=\vec{u}^* - \vec{u}_i$ for a randomly chosen $\vec{u}^*$ and sends $\bar{u}_i$ to the receiver.
 The receiver updates $\vec{w}_i’ \leftarrow \vec{w}_i + \bar{u}_i\cdot b_i$ and sets a new matrix $W’$ whose $i$th column is $\vec{w}_i’$.
 Switching Roles:
 The receiver sets $\vec{r}_0^j \leftarrow W’_{j,\cdot}$ and $\vec{r}_1^j \leftarrow W’_{j,\cdot} + \vec{b}^T$, two messages of length $\lambda$.
 The sender sets a bit $c_j = \vec{u}^*_j$.
 Breaking The Correlation (using a random oracle):
 We assume both parties have access to a random oracle $\mathcal{O}$.
 The sender computes $h_j^i = \mathcal{O}(i,r_j^i)$ for $j\in\{0,1\}$ and $i\in\{1,\ldots,N\}$.
 The receiver computes $h_{c_i}^i = \mathcal{O}(i,r_{c_i}^i)$ for $i\in\{1,\ldots,N\}$.
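Putting it all together, here is an end-to-end toy run in Python (my own sketch: the base OTs and the PRG are replaced by random sampling, and SHA-256 stands in for the random oracle):

```python
import hashlib
import numpy as np

def oracle(i: int, bits: np.ndarray) -> bytes:
    """Toy random oracle O(i, x), instantiated here with SHA-256."""
    return hashlib.sha256(i.to_bytes(4, "big") + np.packbits(bits).tobytes()).digest()

rng = np.random.default_rng(3)
lam, N = 8, 32

# base OTs + PRG extension, modeled as random columns
M0 = rng.integers(0, 2, size=(N, lam))
M1 = rng.integers(0, 2, size=(N, lam))
b  = rng.integers(0, 2, size=lam)
V, W = M0, np.where(b == 1, M1, M0)

# derandomization and role switch
u_star  = rng.integers(0, 2, size=N)
W_prime = (W + ((u_star[:, None] - (M1 - M0)) % 2) * b) % 2

# breaking the correlation: N random OTs, with P_R as sender and P_S as receiver
for j in range(N):
    r0, r1 = W_prime[j], (W_prime[j] + b) % 2
    h0, h1 = oracle(j, r0), oracle(j, r1)   # P_R's pair of random messages
    c_j = int(u_star[j])                    # P_S's choice bit
    h_chosen = oracle(j, V[j])              # P_S hashes the row it holds
    assert h_chosen == (h0, h1)[c_j]        # correctness of the j-th ROT
print(f"{N} random OTs derived from {lam} base OTs")
```

Since $V_{j,\cdot}=\vec r^j_{c_j}$, hashing each party’s rows independently yields matching outputs on the chosen side, while the offset $\vec b$ no longer relates the two hashed messages.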
Adversarial Model
The above protocol is secure against semi-honest adversaries. Without getting too deeply into the definitions, it means that “if everyone follows the protocol, no inadvertent leakage of information will occur”. But what happens if one of the parties decides to deviate from the protocol? IKNP did address this issue in their paper, but before jumping into it let’s think: what can go wrong? The parties can only cheat wherever they communicate, and the only communication in our protocol (assuming the Base-OTs are secure) is in the derandomization phase. The party $P_S$ has to use the same $\vec{u}^*$ when computing all the correction vectors $\bar{u}_i$; it may decide not to do so, which might help $P_S$ guess the resulting messages of $P_R$, or some information about them. To achieve security against malicious adversaries, we have to ensure that $P_S$ is consistent in the way it computes the correction vectors. I’ll try to explain very briefly what IKNP did to achieve this. IKNP solve the problem with the classical approach of “cut-and-choose”: instead of running the algorithm once, they run it $2^\sigma$ times for some security parameter $\sigma$, and then $P_R$ asks $P_S$ to “open” its secret $U,V$ matrices in exactly $2^{\sigma-1}$ of the executions. If $P_S$ cheated in a significant portion of the executions, it will be caught with good probability, and therefore it is safe to assume that the unopened executions are safe.
Notice that this “cut-and-choose” approach is simple but very expensive in terms of computation and communication.
Concluding Remarks
The IKNP protocol has been a foundational building block for further research on efficient OT extensions. The computational cost of IKNP per OT is negligible compared to the heavy machinery involved in running a Base-OT protocol. However, as we have seen towards the end of the post, the main drawbacks of the IKNP construction are:
 High communication cost: for $N$ extended OTs, our protocol sends $\lambda N$ bits to achieve semi-honest security.
 Impractical malicious security: the “cut-and-choose” technique is very expensive, and the attempt to achieve security against malicious adversaries efficiently, via a cheaper variant of the consistency check, was the main motivation for many follow-up works.
Thank you for reading this post! I did take a lot of time writing it, so if it helped you somehow, I’ll be happy to know.
I’ll also be happy to hear your thoughts, questions and corrections, so feel free to reach out in any of the ways listed at the bottom of the page.