


RISC Seminar
2011 Seminars
Workshop on Computational Number Theory
on the occasion of Herman te Riele's retirement from CWI Amsterdam.
CWI Amsterdam, The Netherlands
December 1-2, 2011
For more information, please visit the homepage of the event.

Talk by Prof. Venkatesan Guruswami organized in collaboration with the CWI groups Algorithms, Combinatorics and Optimization (PNA1) and Algorithms & Complexity (PNA6).
Date:  November 25, 2011, 16:00-17:00 
Location:  Room L017, CWI 

Prof. Venkat Guruswami (Carnegie Mellon University):
Bridging Shannon and Hamming: Codes for computationally simple channels
Abstract:
Coding theory has had two divergent schools of thought, dating back to
its origins, based on the underlying model of the noisy channel.
Shannon's theory modeled the channel as a stochastic process with a
known probability law. Hamming's work suggested a worst-case model,
where the channel is subject only to a limit on the number of errors it
may cause. These two approaches share several common tools; however, in
terms of quantitative results, the classical results in the harsher
Hamming model are much weaker.
In this talk, we will discuss a line of research aimed at bridging
between these models. We will begin by surveying some approaches that
rely on setup assumptions (such as shared randomness) to construct codes
against worst-case errors with information rate similar to what is
possible against random errors. We then turn to our results for
computationally bounded channels, which can introduce an arbitrary set
of errors as long as (a) the total fraction of errors is bounded by a
parameter p, and (b) the process which adds the errors is sufficiently
"simple" computationally. Such channel models are well-motivated
since physical noise processes may be mercurial, but are not
computationally intensive. Also, as with codes for worst-case errors,
codes for such channels can handle errors whose true behavior is unknown
or varying over time.
We will describe an explicit construction of polynomial time
encodable/decodable codes with rate approaching Shannon capacity 1-h(p)
that can correct an arbitrary error pattern with total fraction of
errors bounded by p, provided the channel's action is oblivious to the
codeword. We will hint at an extension to channels limited to online
logarithmic space that gives efficient codes with optimal rate that
enable recovery of a short list containing the correct message. (A
similar claim holds for channels admitting polynomial size circuits,
assuming the existence of pseudorandom generators.) Our results do not
use any shared randomness or other setup assumptions.
Based on joint work with Adam Smith.
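As a reminder (our addition, not part of the abstract), h(p) above denotes the binary entropy function, so the capacity bound for error fraction p reads:

```latex
h(p) = -p\log_2 p \;-\; (1-p)\log_2(1-p),
\qquad
\text{rate} \;\longrightarrow\; 1 - h(p) \quad (0 \le p < 1/2).
```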

RISC Seminar on Extractors and Privacy Amplification
Organized in collaboration with the Intercity Number Theory Seminar.
Date:  November 11, 2011, 11:4516:45 
Location:  Room L120, CWI 
11:45 - 12:30 h 
Ariel Gabizon (Technion, currently visiting CWI):
Extractors: Background, Applications and Recent Constructions
Abstract:
Randomness extractors are functions whose output is guaranteed to be
uniformly distributed, given some assumption on the distribution of the
input.
The first instance of a randomness extraction problem comes from
von Neumann, who gave an elegant solution to the following problem:
how can a biased coin with unknown bias be used to generate `fair'
coin tosses? In this case the input distribution consists of
independent, identically distributed bits. Since then many families of more complex
input distributions
have been studied. Also, the concept of randomness extraction has proven to
be useful for
various applications. The talk will give some background on extractors and
review applications
and techniques used in recent constructions of extractors.
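The von Neumann trick mentioned in the abstract is simple enough to state in a few lines of code. The following is an illustrative sketch (the function name is ours, not from the talk): pairs of flips are examined, and only the unequal pairs, which occur with equal probability regardless of the bias, produce output bits.

```python
import random

def von_neumann_extractor(biased_bits):
    """Turn a stream of i.i.d. biased coin flips into unbiased bits.

    Look at non-overlapping pairs: output 0 for the pair (0,1),
    output 1 for (1,0), and discard (0,0) and (1,1). Since
    Pr[01] = Pr[10] = p(1-p) for any fixed bias p, the output
    bits are perfectly uniform (though many input bits are wasted).
    """
    out = []
    it = iter(biased_bits)
    for a, b in zip(it, it):  # consecutive non-overlapping pairs
        if a != b:
            out.append(a)     # (0,1) -> 0 ; (1,0) -> 1
    return out

# Example: a coin with unknown bias (here 0.8 toward 1).
flips = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
fair = von_neumann_extractor(flips)
```

Note that this only works because the input bits are independent and identically distributed; the more complex source families mentioned above require entirely different techniques.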

14:00 - 14:45 h 
Gil Cohen (Weizmann Institute):
Non-Malleable Extractors with Short Seeds and Applications to Privacy Amplification
Abstract:
Motivated by the classical problem of privacy amplification, Dodis and
Wichs (STOC '09) introduced the notion of a non-malleable extractor,
significantly strengthening the notion of a strong extractor.
A non-malleable extractor is a function nmExt that takes two inputs: a weak
source W and a uniform (independent) seed S,
and outputs a string nmExt(W,S) that is nearly uniform given the seed S *as
well* as the value nmExt(W,S') for any seed S' \neq S
that may be determined as an arbitrary function of S.
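Spelled out, the guarantee described above is that for every function A mapping a seed to a different seed, the following two distributions are epsilon-close (a standard rendering of the definition, not text from the abstract; U denotes a fresh uniform string):

```latex
\bigl(\mathrm{nmExt}(W,S),\, S,\, \mathrm{nmExt}(W,\mathcal{A}(S))\bigr)
\;\approx_{\varepsilon}\;
\bigl(U,\, S,\, \mathrm{nmExt}(W,\mathcal{A}(S))\bigr),
\qquad \mathcal{A}(S) \neq S.
```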
In this work we present the first unconditional construction of a
non-malleable extractor with short seeds.
By instantiating the framework of Dodis and Wichs with our non-malleable
extractor, we obtain the first 2-round privacy amplification protocol
for min-entropy rate 1/2 + delta with asymptotically optimal entropy
loss and polylogarithmic communication complexity. This improves on the
previously known 2-round privacy amplification protocols: the protocol
of Dodis and Wichs whose entropy loss is not asymptotically optimal, and
the protocol of Dodis, Li, Wooley and Zuckerman whose communication
complexity is linear and relies on a numbertheoretic assumption.
Joint work with Ran Raz and Gil Segev.

15:00 - 15:45 h 
Stefan Dziembowski (Warsaw University & University of Rome "La Sapienza"):
Leakage-Resilient Cryptography from the Inner-Product Extractor

16:00 - 16:45 h 
Christian Schaffner (University of Amsterdam & CWI):
Randomness extraction and expansion in the quantum world
Abstract:
Randomness extraction is a fundamental task in cryptography, where it is
intimately connected with the problem of privacy amplification. In this
talk we will survey the specific challenges posed by this task in the
setting where an adversary may hold *quantum* information about the
source, and give an overview of the known results in the area.
In the last part, we touch on recent joint work with Fehr and Gelles. We
demonstrate that quantum mechanics allows one to expand some initial
randomness in a secure way, even if the devices used are manufactured by
the adversary.

Talk by Yael Tauman Kalai (Microsoft Research)
Abstract:
Traditionally, cryptographers assume that the secret keys are totally
hidden from the adversary, and that the cryptographic algorithms are run
"correctly" with no errors. Unfortunately, in reality there are various
real-world physical attacks which render this assumption false. These
include leakage attacks, such as, timing, power, and acoustic attacks,
which allow an adversary to (continually) leak information about the
secret keys. In addition, there are various tampering attacks,
including heat and EM radiation attacks, which allow an adversary to
(continually) tamper with the system. Recently, there has been a vast and
growing body of work that tries to secure cryptographic systems against
leakage attacks. However, there have been relatively few results trying to
protect systems against tampering attacks.
In this talk, I will focus on the goal of trying to protect
cryptographic systems against tampering attacks. I will focus on two
recent results. The first shows how to construct encryption and
signature schemes that are secure against an adversary that continually
leaks and tampers (arbitrarily) with the secret key. The second result
allows the adversary to tamper with the algorithm itself (as opposed to
tampering only with the secret key), but guarantees security only against
specific monotone faults.
The first result is based on joint work with Bhavana Kanukurthi and Amit
Sahai, and the second result is based on joint work with Allison Lewko
and Anup Rao.
The 5th International Conference on Information Theoretic Security, ICITS 2011, has been organized by CWI's Cryptology Group.
Dates:  May 21-24, 2011. 
Location:  CWI & Trippenhuis, Royal Dutch Academy of Arts and Sciences (KNAW) 
General chairs:  Ronald Cramer (CWI & Leiden University) and Krzysztof Pietrzak (CWI) 
Program chair:  Serge Fehr (CWI) 
For more information, please refer to the official website of the event.
RISC Seminar
Date:  Wednesday, January 26, 2011 
Location:  room L017, CWI 
14:00 - 14:40 h 
Daniel Wichs (NYU):
Separating Succinct Non-Interactive Arguments From All Falsifiable Assumptions
Abstract:
We study succinct computationally sound proofs (arguments) for NP,
whose communication complexity is polylogarithmic in the instance and
witness sizes. The seminal works of Kilian '92 and Micali '94 show
that such arguments can be constructed under standard cryptographic
hardness assumptions with four rounds of interaction, and that they can
be made non-interactive in the random-oracle model. The latter
construction also gives us some evidence that succinct non-interactive
arguments (SNARGs) may exist in the standard model with a common
reference string (CRS), by replacing the oracle with a sufficiently
complicated hash function whose description goes in the CRS. However,
we currently do not know of any construction of SNARGs with a formal
proof of security under any simple cryptographic assumption.
In this work, we give a broad black-box separation result, showing
that black-box reductions cannot be used to prove the security of any
SNARG construction based on any falsifiable cryptographic assumption.
This includes essentially all common assumptions used in cryptography
(one-way functions, trapdoor permutations, DDH, RSA, LWE, etc.). More
generally, we say that an assumption is falsifiable if it can be
modeled as an interactive game between an adversary and an efficient
challenger that can efficiently decide if the adversary won the game.
This is similar, in spirit, to the notion of falsifiability of Naor
'03, and captures the fact that we can efficiently check if an
adversarial strategy breaks the assumption.
Our separation result also extends to designated verifier SNARGs,
where the verifier needs a trapdoor associated with the CRS to verify
arguments, and slightly succinct SNARGs, whose size is only required
to be sublinear in the statement and witness size.
Joint work with Craig Gentry.

14:50 - 15:30 h 
Elette Boyle (MIT):
Fully Leakage-Resilient Signatures
Abstract:
A signature scheme is fully leakage resilient (Katz and
Vaikuntanathan, ASIACRYPT '09) if it is secure even in a setting where
an adversary may obtain bounded (yet arbitrary) leakage information
on *all intermediate values* that are used throughout the lifetime of
the system. This is a strong and meaningful notion of security that
captures a wide range of side-channel attacks.
One of the main challenges in constructing fully leakage-resilient
signature schemes is dealing with leakage that may depend on the
random bits used by the signing algorithm, and constructions of such
schemes are known only in the random-oracle model. Moreover, even in
the random-oracle model, known schemes are only resilient to leakage
of less than half the length of their signing key.
In this work, we construct the first fully leakage-resilient signature
schemes without random oracles. We present a scheme that is resilient
to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of
the signing key. Our approach relies on generic cryptographic
primitives, and at the same time admits rather efficient
instantiations based on specific numbertheoretic assumptions. In
addition, we show that our approach extends to the continual-leakage
model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs
(FOCS '10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan
(FOCS '10). In this model the signing key is allowed to be refreshed,
while its corresponding verification key remains fixed, and the amount
of leakage is assumed to be bounded only in between any two successive
key refreshes.
Joint work with Gil Segev and Daniel Wichs.

15:40 - 16:20 h 
Krzysztof Pietrzak (CWI):
Subspace LWE & Applications
Abstract:
The learning with errors (LWE) problem asks to distinguish "noisy"
inner products of a secret vector with random vectors from
uniform. Recently, this problem has found many applications in
cryptography. Its appeal stems from two facts:
1. The hardness of the problem is very well understood. "Regev's LWE"
is as hard as worst-case lattice problems. The learning parity with
noise (LPN) version is equivalent to decoding random linear codes.
2. Constructions based on LWE (and in particular LPN) tend to be
extremely efficient, often requiring just a few bit-level operations.
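As a concrete illustration of the "noisy inner products" in the LPN setting, here is a toy sketch (function and variable names are ours, not from the talk): each sample is a uniform vector together with its inner product with the secret over GF(2), flipped with some small probability tau.

```python
import random

def lpn_sample(secret, tau):
    """One LPN sample: a uniform vector a together with the noisy
    inner product b = <a, secret> XOR e over GF(2), where the error
    bit e equals 1 with probability tau."""
    a = [random.randrange(2) for _ in secret]
    e = 1 if random.random() < tau else 0
    b = (sum(x & y for x, y in zip(a, secret)) % 2) ^ e
    return a, b

# The LPN assumption: given many such samples (a, b), an efficient
# attacker cannot recover the secret, nor even distinguish the pairs
# from ones where b is replaced by a uniformly random bit.
s = [random.randrange(2) for _ in range(128)]
samples = [lpn_sample(s, 0.125) for _ in range(256)]
```

Without the noise bit e, Gaussian elimination on enough samples would recover the secret immediately; the noise is what makes the problem (conjecturally) hard.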
In this talk I will first introduce a (seemingly) much stronger
"interactive" assumption called subspace LWE, which can nevertheless be
proven equivalent to the original LWE problem. This result will
immediately imply that the LWE/LPN problems are surprisingly robust
with respect to tampering with the secret and/or the randomness used
to generate the samples. This robustness directly translates to
stronger security guarantees (e.g. it implies security against a broad
class of relatedkey attacks) one can give for cryptosystems proven
secure under the standard LWE assumption.
In the second part of the talk I'll show simple and extremely
efficient constructions of authentication protocols and message
authentication codes (MACs) whose security can be reduced to subspace
LPN. These new schemes are an order of magnitude more efficient than
solutions based on AES, and unlike AES or any other practical schemes
known to date, are based on a well-known hardness assumption. This part
of the talk is based on joint work (to appear at Eurocrypt 2011) with
Eike Kiltz, David Cash,
Abhishek Jain and Daniele Venturi.



