FC05 – Financial Cryptography and Data Security
Roseau, Commonwealth of Dominica
February 28 – March 3, 2005
Fraud within Asymmetric Multi-hop Cellular Networks – Gildas Avoine
Protecting Secret Data from Insider Attacks – David Dagon, Wenke Lee, et al.
Countering Identity Theft through
Digital Uniqueness, Location Cross-checking and Funneling
Keynote: Trust and Swindling on the Internet – Bezalel Gavish
Time Capsule Signature – Yevgeniy
Dodis and Dae Hyun Yum
Policy-based Cryptography and
Applications – Walid Bagga and
Stuart Stubblebine - Countermeasures
Mike Szydlo – Context-aware
phishing: risks and counter-measures
Steve Myers – a proposed protocol
Richard Clayton – More tools that
will probably fail
Testing Disjointness of Private
Datasets – Aggelos Kiayias and Antonina Mitrofanova
RFID traceability: A multiplayer
problem – Gildas Avoine and Philippe Oechslin
Supporting Financial Transactions
Risk Assurance for Hedge Funds using
Zero Knowledge Proofs - Michael Szydlo
Systems, Applications, and Experiences
Securing Sensitive Data with the
Ingrian DataSecure Platform - Andrew
Koyfman
Ciphire Mail Email Encryption and
Authentication - Lars Eilebrecht
Keynote: Lynne Coventry - Usable Security: A conundrum?
Small Coalitions Cannot Manipulate
Voting - Edith Elkind and Helger Lipmaa
Efficient Privacy-Preserving
Protocols for Multi-Unit Auctions - Felix Brandt and Tuomas Sandholm
Event Driven Private Counters - Eujin Goh and Philippe Golle
Panel – It’s the Economics, Stupid. The Economics of Information Security – Allan Friedman
Bezalel Gavish: An
economically-motivated anti-spam proposal
Paul Syverson: Why identity theft is
about neither identity nor theft
Sven Dietrich: Economics of Denial
of Service Attacks
Richard Clayton: Stupid Economics
Weaknesses in the wireless network
Wireless networks with nodes and stations
Single-hop cellular: node → station1 → station2 → node
Multi-hop networks: each node is also a router
Multi-hop cellular: each mobile node tries to route traffic toward a base station
Problem: how to encourage mobile nodes to properly forward traffic
Scheme
Lightweight
Small cheating is possible; large cheating can be detected and punished
Originators charged for packets sent
Intermediaries are rewarded statistically
Operator monitors
Each user shares a symmetric key with the operator
Routing
If can reach station, send to station
Else, forward to another node
Has a reward
Keeps identity of sender
Encrypted MAC, checksum
Also used for probabilistically claiming rewards by forwarders
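One plausible shape of the probabilistic reward claim (my assumption, not the paper's exact construction): a forwarder may claim a reward only when a hash of the packet's MAC together with its own ID falls below a threshold, which the operator can re-verify since it shares the sender's key.

```python
import hashlib
import hmac

CLAIM_THRESHOLD = 2**252  # top 4 bits zero: ~1/16 of packets are claimable

def packet_mac(sender_key: bytes, payload: bytes) -> bytes:
    # Sender MACs the payload under the key it shares with the operator.
    return hmac.new(sender_key, payload, hashlib.sha256).digest()

def claimable(mac: bytes, forwarder_id: bytes) -> bool:
    # A forwarder may claim a reward only when H(mac || id) is small;
    # the operator recomputes the MAC and checks the same condition.
    h = hashlib.sha256(mac + forwarder_id).digest()
    return int.from_bytes(h, "big") < CLAIM_THRESHOLD

key = b"user-operator shared key"
mac = packet_mac(key, b"payload 1")
# Over many packets, roughly 1 in 16 is claimable by a given forwarder.
claims = sum(claimable(packet_mac(key, bytes([i])), b"node-A") for i in range(256))
```

This keeps per-packet accounting cheap: no receipt is filed for most packets, yet expected reward is proportional to packets forwarded.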
Attack 1 – fake identity-based
Two attackers talk freely inside cell
Proper: two nodes should talk through the base station, even if they are inside the same cell
Attack: recipient keeps the message, doesn't forward it
Solution: force message authentication → each node has to talk to the base station to find out about the node
Attack 2 – recovering user’s secret key
User can’t check MAC
BUT – reward mechanism can be used as an oracle: a true/false answer for whether a guessed key is correct
Attack is a function of reward threshold, key length
[Proof]
Claim: h is between 50 and 75%
Basic problem: key is used in authentication
Solution: disturbing the input distribution
Occasionally send random values, with a checksum ??
Better: use a hash of the key
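The oracle attack above can be sketched as a toy simulation (all primitives simplified, 16-bit key space so the brute force finishes instantly; the real attack cost depends on the reward threshold and key length):

```python
import hashlib
import hmac

SECRET = (4242).to_bytes(2, "big")  # victim's key (toy 16-bit key space)

def reward_oracle(message: bytes, tag: bytes) -> bool:
    # Models the reward mechanism: it reveals one bit - whether the
    # submitted tag verifies under the victim's key.
    expected = hmac.new(SECRET, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def recover_key(message: bytes) -> bytes:
    # Attacker never sees the key or a valid tag; it queries the oracle
    # with a candidate tag for each guessed key.
    for k in range(2**16):
        guess = k.to_bytes(2, "big")
        tag = hmac.new(guess, message, hashlib.sha256).digest()
        if reward_oracle(message, tag):
            return guess
    raise ValueError("key not in search space")

recovered = recover_key(b"forwarded packet")
```

Hashing the key before use, as suggested, breaks this because the oracle no longer answers questions about the raw key material.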
Protecting Secret Data from Insider Attacks
Perimeter defenses are not sufficient
Assume penetration, or insider attacks
Only looking at storage issues on compromised systems
Observation: large size offers some protection in the real world. True in the info world?
Table T has following properties: n records, r bits, m size of record, d fraction of stealable, k is a security metric
Primitives: init, add, delete and find
Finding data in table should be hard
Solution 1:
Init with randoms
Avoid linear scans: increase m, distribute x so that d approaches 1
Need the whole table to get most of the data
Increase size such that memory access is required
Force crackers to work with disk I/O
VAST storage system
1 tera-scale table, padded
Insertion:
Encrypt to make message X look random (don’t care about strength of crypto)
Store secret shares separately
Recovery: use Lagrange interpolation
Can heal from any missing shares
Analysis:
Linear scans useless
Search space is huge for guessing
Stealing portion of the table won’t work
Brute force with entire table is really really slow
Arms race of memory vs. disk
Speed difference between memory and disk could be seen as a “poor man’s one-way function”
Reliability: there will be some collisions, but we can afford that with the space
Can calculate based on salts, terabytes, etc
Speed: 300 disk-lookups/second
Adding more drives actually increases disk access speed: each arm is own lookup
Simson: can you realistically imagine someone using this?
A: Scaling issue if you force everyone in a large disk to use I/O
Engineering problem of simultaneous access
Rotational speed is a constant
Business case: increasing risk of losing data is nontrivial
Q: Why not encrypt each field with its own key?
A: Goal of preventing partial file from being useful
“Unauthorized use and exploitation of individual’s identity-corroborating information”
Phishing – even “phishing kits”
Keylogging – e.g. the Bankhook Trojan
Clearinghouse database
2002 – 3 million fraudulent loans,
Unclear whose responsibility to solve it
Individuals poorly positioned to handle the problem
Consumer credit agencies are in position to help
BUT – victims of data attacks themselves
- Delayed notice b/c of investigations
- Rumor: executives sold stock before notification
Underlying problems
Ease of duplicating data and credentials
Hard to detect when data is duplicated
No back channel to notify subject when new credentials are granted
Proposal: general authentication architecture
Use physical location
Background: cell phones with 911 user location
Each person has a personal device which broadcasts location
Can be good identifier: biometric/PIN
Uniqueness through location
1 signal: assume proper user
0, 2+ signals → system error
ID verification at point of use
At ID assertion, check for local phone signal
Credit card authorization example
User → merchant: CC
Merchant → CC auth: CC, ID, transaction
CC auth → merchant: conditional auth, LVS, token
Merchant → CC auth: …
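The "exactly one signal" rule from the flow above can be sketched as a toy check (names, the registry shape, and the signal model are illustrative assumptions, not the paper's protocol):

```python
# registry maps device-id -> (user-id, current location broadcast).
def authorize(user_id: str, location: str, registry: dict) -> str:
    # Gather every personal-device signal belonging to this user.
    user_signals = [(dev, loc) for dev, (uid, loc) in registry.items()
                    if uid == user_id]
    if len(user_signals) != 1:
        return "system error"   # 0 signals (device off/stolen) or 2+ (cloning)
    _, loc = user_signals[0]
    # Exactly one signal: approve only if it is at the point of use.
    return "approved" if loc == location else "declined"

devices = {"dev1": ("alice", "store-17")}
authorize("alice", "store-17", devices)   # exactly one co-located signal
devices["dev2"] = ("alice", "elsewhere")  # a clone appears
authorize("alice", "store-17", devices)   # now a system error
```

The cloning detection rests entirely on the 2+ case: a clone that stays switched off most of the time, as noted below, evades this check.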
Security Analysis
Theft of device
Cloning → multiple locations
Theft, clone, return
Same location attack:
Personal device could communicate a nonce
Needs a legal & economic solution
Privacy issues with location
Technical solution to whole issue
Paul: issues with implementation of real-world location verification?
A: part of the threat model, but assume that system works
Q: Can’t I take over a cell phone too? Aren't we just pushing the problem to a lower-tech device?
A: Redundancy of personal device with identity credentials, but out of luck if it's compromised as well
Simple compromise: phone cloned, or device turned off → error detected
Need more complex attack: cloned, but leave it off most of the time
Q: If the mobile device works, then why do we need the credential?
A: Free market: everyone can have their own credit card, etc.
Simson: Email answer-back is a similar issue, and I know when someone has taken over my account.
Victims are seen as perps when they try to respond
This system seems vulnerable to making the
When system fails, if it’s stronger, then
Fraud statistics
IC3 website complaints growing (under-estimates)
Most cases are $100-1000 complaints
US is the center of 80% of fraud
Data collected by self-reported victims
Trust and trust creation
Nowak and Sigmund (2000, Science) – basis of all human systems of morality
Building trust – has to be earned
Personal interactions and impressions (repeated business & interaction)
History, reputation, social capital
References, intermediaries, transitive closure
Contracts, Payment after delivery
[… fairly elementary discussion of trust and auction fraud]
Fraud methods
Delayed shipping → profit off a 60-day float of payment: interest
Bait & switch with upgrade
Delayed shipment with triage – How is this fraud??
Shipped empty box, inferior product, damaged, etc
False escrow companies that use URL transformations a la phishing
Partners bid up items
Fast turn-around scam
Build up a good reputation by selling many cheap, easy to ship items
There is a 15-20% price premium for positive past history
Auction sites publicize that fraud is very rare: 10-20 thousand auctions for every instance
Very little research was done to test that claim
Can’t use data from actual auction houses: need to get your own data
Empirical study
Online questionnaire to 1300 auction winners
10% responded
Biases:
Most people will not admit that they were swindled (look stupid) → under-reporting
OR – people are angry, will report à accurate
Results:
Of responses: 21% are dissatisfied
Of surveys sent out: 2% are dissatisfied
Not all exactly fraud, but 2% ≫ 0.01%
In the process of a much larger study
Reputation premium?
Statistical methods for signaling fraud
Why don’t auction houses do more to limit fraud
Liability → if they say they do nothing, hard to be angry at them for letting bad guys through in your case
Any deviation from caveat emptor creates potential liability
Asymmetry between buyers and sellers because sellers are the sine qua non of auctions
Site owners favor sellers
Security level is 2^L for a key length L
Can forge a signature in O(2^L)
Problem: want a shorter sig length with same security
Abe-Okamoto (1999) – shortening sigs for message recovery
Most dig sig require you to set the size of your message box
Need random bits, or cannot recover over-sized segment
Better scheme: can sign whole message, but can’t recover message beyond box
[attended to some work for Jean]
Standard: validity of signature is determined at generation, and never changes
Can we have a future sig that is not valid now, but becomes valid at future time t
New problem: time capsule signature
After time t, incomplete sig becomes valid sig
Focus on absolute time
Two different sigs are computationally indistinguishable
If sender wants, they can make signature valid before time t
BUT receiver can’t use sig before time t
Time server needs Alice’s signature
Alice cannot make a non-hatchable signature that looks good to bob
Uses Identity-based trap-door hard-to-invert relation
What is this good for?
“If anyone has any ideas what this could be used for, let me know”
Policies are a fundamental security concept: access control, privacy control
Also crypto – encryption (confidentiality) and signature (integrity and authentication)
Hard to combine
Policy = monotonic logical expressions of ands and ors
Conditions are defined through trusted authority and assertion
Policy-based encryption
Takes a message m, a policy a, and outputs c
Decrypt c with policy a to get m, if policy allows
Example: client sends encrypted request to service provider with policy for privacy certification
SP can only decrypt if approval from cert
Encryption
Need to translate logical operators into mathematical equation for a key
…
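The AND/OR-to-key translation mentioned above can be sketched with a toy XOR scheme: AND nodes split the key so every branch is needed, OR nodes hand the same key to each branch so any one suffices. This is purely illustrative; the paper's actual construction is different.

```python
import secrets

def distribute(key: bytes, policy) -> dict:
    # policy is either a condition name (str) or ("and"|"or", [subpolicies]).
    # Returns a map: condition name -> key material granted for it.
    shares: dict = {}
    if isinstance(policy, str):
        shares.setdefault(policy, []).append(key)
        return shares
    op, subs = policy
    if op == "or":
        parts = [key] * len(subs)          # any branch alone recovers the key
    else:  # "and": key = part1 XOR part2 XOR ... XOR partN
        acc = key
        parts = []
        for _ in subs[:-1]:
            r = secrets.token_bytes(len(key))
            parts.append(r)
            acc = bytes(a ^ b for a, b in zip(acc, r))
        parts.append(acc)                  # all branches together recover it
    for sub, part in zip(subs, parts):
        for cond, ks in distribute(part, sub).items():
            shares.setdefault(cond, []).extend(ks)
    return shares

key = secrets.token_bytes(16)
# "certA AND (certB OR certC)"
shares = distribute(key, ("and", ["certA", ("or", ["certB", "certC"])]))
```

Holding the material for certA plus either certB or certC lets you XOR back the decryption key; any non-satisfying subset is missing an XOR share.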
Paul: what does this offer that current trust negotiation doesn't?
A: Full integration with a single ciphertext, which avoids having to divide negotiation and crypto
No negotiation, just usable or not usable
Attack that combines social engineering and technology
Get valuable information
Install software
Spam users with a convincing reason to access a misleading website
Often use fake addresses, fake SSL certs
2005 RSA survey with 1000 respondents shows a loss of confidence in e-commerce
Get this survey from the RSA PKI conference
23% feel more vulnerable to ID theft
43% not willing to hand out personal info at all
15% of users have a single password
66% have fewer than 5 passwords
Who is at risk?
Experts are fooled
Getting harder to distinguish b/n real and fake emails
Pharming – modify the hosts file on the infected computer
Estimates of $1.2 billion (Gartner) down to $150 million worldwide
(doesn’t include opportunity cost)
Graph from anti-phishing.org shows steady increase
No universal fixes from a technical perspective
General advice: never click on links of unsolicited emails
Good: promotes awareness of insecurities
BUT – eliminates usefulness of links in email,
Conflicting – e.g. Citi says don't click links, but they send out emails with links
Fool the phishing site: give an incorrect password first
Good: works at first
BUT: won’t work for long
Merchant perspective
Email is risky and uncontrollable
Don’t want to give up direct channels
Consumer advice is not enough
Web spoofing research goes back to “Web Spoofing: An Internet Con Game” (Felten, Dean, et al., 1997, Baltimore)
Classic: use JavaScript to redraw a page and spoof the entire web on demand
Unique attack: it’s against the USER, not the COMPUTER
Man in the middle
Vast majority today are spam-driven, not MitM
Less technical over time
Working group: SF electronic crime task force report
Spoof-guard
Technologies
Authentication, filtering (spam, preventing data delivery)
Need: a link between the spam filter and the web browser
Really want to know whether the link clicked on is a piece of spam
Build
this into my reputation system!!
See also: Stanford SpoofGuard
Client-side defense against web-based identity theft
Personal security questions are the Achilles heel of many auth mechs
Very vulnerable to phishing
Difficulty of guessing / discovery
How widely dispersed in public databases?
Need to combine usability and security: memorable, applicable, repeatable
Context dependent: phone vs. web form
Adaptive Phishing
Graph representation: multiple starting points, each with a route to target state
“Context-aware” phishing
Standard: “I’m from the phone company, may I come in”
Context aware: cuts the line, then comes in to fix it
Trust decisions: often the server is attacked
Tech: client authentication
Authentication
Long term robust defense – a “safe” mode of the OS when entering passwords
Implicit authentication assumptions do NOT hold in the real world
Easy to duplicate a site
Hard for companies to enforce trademarks online
Easy to direct lots of customers to a fraudulent site
High reward, low risk
My stuff: phrase users actions in risk/reward terms
Improperly transported security models
Crypto gives us password-authenticated key exchange (PAKE)
But: need PAKE, shouldn’t be spoofed
DPD (Delayed Password Disclosure)
Display a separate image for each character of the password entered that the user can authenticate
No hardware required
As secure as PAKE or SSL
Not immune to MitM evil-monitor attack
Usability
More complex
BUT – don't have to recall the pics, just recognize them
Phishing goes back to attempts to steal AOL accounts in the mid-'90s
Con artists are really good
Passwords are really bad as protocol mechanisms
Even if Alice can prove her ID, no binding to action (Alice pays gas bill, but phisher steals £10,000)
Even with crypto, have to trust intermediary software
“We’ve got a bug in the software, go to bankname.newsoftware.com”
Client certificate stops MiM
Stops banking from cyber-cafes, account aggregation
BUT – phisher could give you revised certificates
Current browser – can’t rely on ANYTHING on the screen to be credible
Turning off all the dangerous bits would probably also break the bank's website
Lots of small tools
Realtime browser checks on the website
One-time passwords
Client certs
BUT:
Bad guys are better at cooperating than the good guys i.e. botnets
Avi: shift in consumer land away from the Internet. Is this going to be horrible?
Stuart: businesses will find an efficient solution
Mike: banks have a huge incentive to get people online, so they want to encourage people to go online
Avi: no capability to protect people's information, but now it's more active
Richard: move away from links in emails
C: Scotia uses SSL3
C: Educating users may be the only solution – links are maps, not hrefs
Drew: phishers
Paul: transport-layer info is BAD for all authentication. Way more problems than it solves
Gavish: People were afraid of credit cards, but they realized it was in their interest. When the banks care, they will intervene
Jean: credit cards were adopted when the Supreme Court stepped in and re-aligned risks
Paul: Credit cards have been around for a while, but only recently passed checks
Jean: why don't email services do phishing protection
Stuart: it’s a cost to them – will only do as much as they are incented to
Look at who is losing money: maybe a bank consortium will do something
BUT – banks are slow to act, and tend to only look into own interests
Richard: a fair number are being pulled out of the stream by anti-spam emails
Phishing persists, so it must be working for them
BUT – more “win the lotto” spam
ISPs could get customers list if they wanted
Simson: Why not use host information for security (other than Tor-breaking)? How would it decrease security?
Paul: Use non-secrets as pseudo-authenticators
We can spoof network addresses, it’s already broken
It prevents people from protecting themselves, or getting data they want
.rhosts (unix term) are a
Comment: whois task force is engaging in big changes
Trying to prevent
Q: Is there any data about how much you need to get to bite?
Coupon could refer to a gift certificate, admission card, etc
Right to claim a service or good
Can only be spent for a limited number of times
A financial incentive (value?)
Or: 10 for the price of 9
May or may not be splittable
Anonymous
Why use them?
Customers don’t have to pay full price
Vendors receive payment in advance
Store loyalty → reward repeat business
Customer retention (lock-in)
→ very suitable for digital content providers
Also, people won't use customer support (returns, etc.), which is a drawback of many
Requirements
Security (vendor): Unforgeability, double-spending protection, redemption limitation
Protection against splitting (vendor): only regular customers, prevent pooling
Privacy: unlinkability (between issue and redemption, or across redemptions
Building blocks
A signing scheme (Camenisch/Lysyanskaya 2002)
Uses a different exponent root for each signature
Issuing
Customer chooses random string, computes a binding factor, computes a value D wrt vendors public key
Vendor computes a blinded signature
Customer unblinds the signature
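The blind/sign/unblind flow above can be illustrated with plain RSA blinding. The paper's scheme uses Camenisch-Lysyanskaya signatures; RSA blinding is shown here only for the flow, with tiny, insecure toy parameters.

```python
# Toy RSA key (far too small for real use).
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # vendor's private exponent

m = 123456                  # customer's coupon value (must be < n)
r = 5555                    # customer's random blinding factor, gcd(r, n) = 1

blinded = m * pow(r, e, n) % n         # customer blinds the message
blind_sig = pow(blinded, d, n)         # vendor signs without seeing m
sig = blind_sig * pow(r, -1, n) % n    # customer unblinds: sig = m^d mod n

assert pow(sig, e, n) == m             # valid signature, vendor never saw m
```

Unblinding works because `blinded^d = m^d * r^(ed) = m^d * r (mod n)`, so multiplying by `r^-1` leaves `m^d`.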
Spending a single coupon on a multi-coupon
Minimal disclosure: assurance that there is at least one unspent coupon, not multiple coupons
Unlinkability – vendor can’t decide if two single coupons belong to single multi-coupon
SOLN: show that single coupon is covered by a provable but undisclosed multi-coupon signature
Need to
System allows for issuance of multi-coupons
Unlinkable for the vendor
Q: The notion of content-extraction signatures has been around for a little while.
C: Yes, this is related
Alice has dataset Sa of size Na, Bob has Sb of size Nb
Looking for intersections BUT – want to preserve privacy
Only one bit should be revealed: yes or no
Private Intersection Predicate Evaluation (PIPE)
PIPE1 – unrestricted size, linear in N
Use the size of the universe
PIPE2 – size limited by N, quadratic in S
“Superposed” encryption
Encrypt M to ciphertext C, then superpose it to C'
Can be decrypted twice to a "random" ciphertext of M*M'
Alice computes a polynomial based on her values
Based on the universe of possible values
Bob computes it on everything in his set
If he gets a zero, there is an intersection
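The polynomial idea can be seen in the clear (encryption omitted; in PIPE the evaluation happens under superposed encryption so only the one bit leaks): Alice encodes her set as a polynomial whose roots are her elements, and Bob multiplies its evaluations at his elements.

```python
def poly_eval(roots, x):
    # Polynomial with the given roots, evaluated at x: prod(x - r).
    result = 1
    for r in roots:
        result *= (x - r)
    return result

def disjointness_bit(set_a, set_b):
    # Product over Bob's elements; zero iff some element is a root,
    # i.e. iff the sets intersect. One bit is all that is revealed.
    product = 1
    for b in set_b:
        product *= poly_eval(set_a, b)
    return product != 0   # True -> disjoint

disjointness_bit([1, 5, 9], [2, 3, 4])   # disjoint sets
disjointness_bit([1, 5, 9], [4, 5, 6])   # 5 is shared
```

Any shared element zeroes the whole product, which is why the result reveals intersection/no-intersection and nothing about which element matched.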
PIPE3
Also uses polynomial from PIPE2
Why is this interesting
Atomic operation
Useful for…??
Determines whether two people may want to further communicate
Q: Why use PIPE2 vs. the cert authority or circuit simulation?
A: Not looking for better bounds, just wanted to add the notion of super-posed encryption
… nature of RFID
Issues with RFID: privacy
Information leakage
Soln: identifier should not convey information
Traceability
If we want to address the new technology, we need to address these issues
Easier to track people with RFID than other technologies
Cannot be switched off
Can be invisible
Easy to analyze logs
Increasing range of
Physical solutions: blocker tags, kill the tags, faraday cage, etc
Software solution
Problem: authorized party can ID that tag, but unauthorized party cannot track it
RFID stack: physical layer, communication layer, application layer
Privacy needs to be ensured at each layer
Multilayer problem (Molnar, Juels, etc)
Application layer – ID protocol itself
Communication layer – medium access (collision avoidance)
Tags can't communicate with each other – the reader handles collisions itself
Deterministic and probabilistic protocols
Physical layer: air interface – frequency, modulation, etc
Very open for eavesdropping
Threats from diversity of standards, radio fingerprints
Can’t just focus on the application layer
In practice, we need a crypto function at the communication layer
Trade-off: weak, cheap protocols or strong, expensive protocols
C: we can kill tags, and there are functional RFID jammers
Security problem
Attackers go for the keys, stored in weak media
Solution: use physics to generate keys that would prevent cloning – Physical Unclonable Functions (PUFs)
Based on a complex physical system
Easy to evaluate
Hard to invert
Many inputs with random-looking outputs (challenge response pairs)
Deterministic
Unpredictable even for someone who has the function
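The challenge-response interface described above can be modeled in software (a real PUF derives responses from physical randomness; this toy stands in a keyed hash, with the hidden value playing the role of the manufacturing variation):

```python
import hashlib

class ToyPUF:
    def __init__(self, physical_randomness: bytes):
        # In a real PUF this is physical structure, not readable data.
        self._variation = physical_randomness

    def respond(self, challenge: bytes) -> bytes:
        # Deterministic, but random-looking across challenges.
        return hashlib.sha256(self._variation + challenge).digest()

puf = ToyPUF(b"die-specific randomness")
r1 = puf.respond(b"challenge-1")
assert r1 == puf.respond(b"challenge-1")   # deterministic
assert r1 != puf.respond(b"challenge-2")   # unrelated per challenge
```

The security questions in the talk map onto this model directly: "electronic cloning" amounts to learning enough challenge-response pairs to emulate `respond` without the physical device.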
Example – optical PUF
Shine a laser on a pattern, get a pattern from which you can get key
Angle changes the outcome; does it keep the same key gen?
Similar robustness issues as biometrics
Noise is fatal
How secure are PUFs?
Physical cloning – just demand that the actual physical PUF is there
Electronic cloning
Get a full challenge-response space, or at least extract all the entropy
Need to quantify the security threat for the measurement attack
Use information-theoretic model of PUF
Q: How secure is a measurement like that if the measurements can be adaptive, i.e. look at outputs to pick best measurement
A: That would require modeling the actual physical process → very, very complicated
Q: Possible number of states is limited – less brute force
A: Challenge space is 10^8, with several thousand bits of output space
Hedge funds: $1 trillion in assets
Manipulate risk
Some are hedged, others are leveraged
Structured like private partnerships
Exclusive, illiquid, high fee structure
Opaque instruments, lots of secrecy
BUT – need good risk information
Applications to mainstream financial engineering
Use ZK proofs to describe secret, committed portfolio
Investor can get real time information about real-time risk
Similar to the idea of a “risk server”
Secrecy and hedge funds
Less regulation à more exploitable flexibility
Exploits statistical arbitrage
Need to keep the fund's strategy secret, because differences disappear as people learn and invest
BUT – transparency is important too
Enormous losses also possible
Long Term Capital Management
Less oversight fosters fraud
Portfolio risk factors
Common investment risks: firm earnings, geopolitics, laws, etc
Each asset exposed differently
Measuring risk
Asset allocation percentages: type, sector, region
Multi-factor models
Scenario analysis
LIMITS: models never capture all risks, can’t communicate everything to investor
Crypto components
Commitment of an integer
…
Focus on additive risk characteristics
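Additive risk characteristics pair naturally with additively homomorphic commitments. A Pedersen-style sketch with toy parameters (a real deployment needs a proper group and generator setup):

```python
P = 2**127 - 1          # prime modulus (toy choice)
g, h = 4, 9             # "generators" (illustrative only)

def commit(value: int, blinding: int) -> int:
    # Pedersen-style commitment: g^value * h^blinding mod P.
    return pow(g, value, P) * pow(h, blinding, P) % P

# Commit to two per-asset risk exposures separately...
c1 = commit(30, 1111)
c2 = commit(25, 2222)
# ...the product of commitments is a commitment to the sum, so the fund
# can prove a bound on total exposure (55) without opening each position.
assert c1 * c2 % P == commit(55, 3333)
```

This is the mechanism that lets the investor check aggregate risk statements against committed assets without seeing the portfolio itself.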
Scheme
Asset commitment in exchange for contractual agreement
Investor gets risk proofs as well
Include a trusted third party for ease of verification, audit
Contract has to have the legal details of the fund
Recourse if ZKP can’t be maintained
Limitation on possible assets that can be invested in
Risk statements made explicit
Expressed as bounds
For commitments, need to have a paper trail to verify that they are true after the fact
Appealing to investors
Forces better-crafted investments
Useful for institutional investments
Q: Can I query this strategically to find out, ie if there are
A: Can lay out those bounds ahead of time
Q: What about moving the calculation to the third party, rather than the hedge-fund manager? The manager could manipulate things.
A: Not anticipating huge computation burden.
Make multiple, partial transactions anonymously
Example: treasury surveillance looking for drug money
Threshold of $10,000 → people start to make transactions that are almost that large → slippery slope
Want to allow smaller transactions anonymously, but let a cumulative threshold allow for un-anonymizing
Use probabilistic polling
Each transaction increases the shares that escrow agency has
As threshold is reached, very high probability that agency has enough shares to decrypt
Robust probabilistic information transfer
…
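The probabilistic-polling idea above can be simulated: each anonymous transaction hands the escrow agency one key share with some probability, so small senders stay anonymous while anyone crossing the cumulative threshold is de-anonymizable with high probability. Parameters here are illustrative, not from the paper.

```python
import random

random.seed(1)              # deterministic demo run
SHARE_PROB = 0.5            # chance a transaction leaks one share to escrow
THRESHOLD_SHARES = 10       # shares the agency needs to decrypt the identity

def shares_after(num_transactions: int) -> int:
    # Count how many shares escrow collects across a sender's transactions.
    return sum(random.random() < SHARE_PROB for _ in range(num_transactions))

small_sender = shares_after(5)    # few transactions: stays under threshold
big_sender = shares_after(100)    # many transactions: almost surely over
```

In the real scheme the "shares" would be secret shares of the sender's identity, so the agency learns nothing until it holds enough of them.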
Roughly 20 years of trying to get secure email
S/MIME has been supported since 1998 – not being used
Hard to use: need your certs, and others'
Secure email isn’t needed
What do we mean by “email security”
Disconnected email islands (groove)
Filtering
Sender-certified, or sender pays
“using s/mime” – easy for a mail client to verify a sig
No reason for large orgs not to send signed mail
Amazon has been using S/MIME since 2003 for VAT notices
Problems with s/mime
Exformation
Kain, Smith and Asokan –
S/MIME doesn’t sign/encrypt subject
Lousy signature standard, but it’s what we have now
Survey about use – 417 completed responses from customers of online merchants (skewing educated and wealthy)
93 European vs. 376 US
Results
Does your email handle encryption – most don’t know
Receiving digitally signed mail increases knowledge
Bank and online merchants should be signed
Tax returns, personal mail, etc
Banks should seal, as should tax returns
Don’t think that companies are better if they sign their messages
Smart enough to not believe it will be more truthful
Some feel important, but
Too complicated
Not worth the effort
No reason for them not to send out signed email
Them = large online merchants
Signed email has to be visible,
Costs are trivial – class 1 cert
Q: Is bootstrapping hard?
A: Not a problem now – huge % of email clients are primed to receive digitally-signed messages
Q: Do people have to experience loss?
A: Phishing is loss. People just need to be aware of it.
Q: Will this fix everything?
A: No. But it will fix a large % of the issues.
Tangent: fixing part of the problem is something that engineers
Q: Do people distinguish between a signature and a digital signature
A: Yes. People are fairly smart, once they know about it at all
People don’t even think about content, they just think it verifies sender
SSL is relatively secure for data in transit
Not a high payoff for a lot of work
→ attack the database
Need to keep the records encrypted in the database
BUT – perimeter security breaches are regular, and many attacks from insiders, devices can be stolen
Legislation demands data privacy – California Privacy Legislation, FISMA, Gramm-Leach-Bliley
Application level security
Encrypt before it’s put in the database
BUT – requires changing the existing applications
Database level encryption
Transparent to existing apps
BUT – difficult to program
Practice
Don’t want to encrypt the entire table, just the sensitive info
Encrypt the column
BUT – breaks existing attempts to read the data
Use triggers, which decrypt the data
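Column-level encryption with a trigger-like read path can be sketched as below. The XOR keystream is a stand-in for a real cipher (it is NOT secure: same key gives the same stream for every row) and the schema is illustrative, not Ingrian's.

```python
import hashlib

KEY = b"column-master-key"   # in the real product, held by the key server

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy counter-mode keystream from SHA-256; XOR encrypts and decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def insert_row(table: list, name: str, ssn: str) -> None:
    # Only the sensitive column is encrypted, not the whole row.
    table.append({"name": name, "ssn_enc": keystream_xor(KEY, ssn.encode())})

def read_row(table: list, i: int) -> dict:
    # Trigger-like read path: decrypts transparently for authorized readers.
    row = table[i]
    return {"name": row["name"], "ssn": keystream_xor(KEY, row["ssn_enc"]).decode()}

customers: list = []
insert_row(customers, "Alice", "123-45-6789")
read_row(customers, 0)["ssn"]   # plaintext again for the application
```

A stolen copy of `customers` exposes names but not SSNs; the attack reduces to obtaining the key held elsewhere, which is the point of the network-attached key server described next.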
Requirements
Only authorized users, but each server can enc/decrypt data
Network-attached encryption server
Keys for entire enterprise on a centralized server
Handles all encryption for organization
Different types of users handling different operations
Different permissions, rate of processing, times of day, etc
Q: What about indexed fields?
A: Those are tricky.
Q: How are the PKCS5 passwords generated
A: Those are just padding.
Q: Why did you choose the Ncipher card?
A: Legacy issues
Q: Can you talk a little about the role of identity management / authentication
A: Passwords, SSL with certs make our customers happy
Q: If I can steal your DB, does that mean I just have to steal your cert?
A: Yes, but then it's an offline attack rather than an online one
Q: Why is there no encryption proxy – what is an encryption proxy?
A: We’re working on it.
Ciphire mail is an email encryption tool for Windows, Linux and Mac
Why new technology?
Hard part of secure communication is key exchange
Web of trust
Normal users don’t understand not to trust un-introduced parties
Trusted third party (X.509, PKIX)
TTP is vulnerable, may not be trustworthy
Client is a transparent proxy between email client and server
No integration needed
Access with a central server for creation and download to server
No user involvement needed
Certificate is for an email address, not a person
Use ASN.1, based on X.509
Cert binds multiple public keys to an ID
Email messages are processed on the fly by the client, not the email user
User just has to learn what the tags will mean
Headers are encrypted
Each MIME part is sent separately
Employs a hybrid trust model
Hierarchical from the PKI trust model (issuer-driven)
Has some aspect of detection against central subversion
Client checks own certs, other certs, and compares summary hash values with each communication partner
Many users would be able to see if something had been changed
Ciphire fingerprint system
Q: Given that PGP has been deployed, is this compatible?
A: No.
Q: Are you X.500 or X.509 based?
A: We don't use LDAP (Simson: why on earth have they built everything themselves?)
Q: What about complex systems with 7 out of 8 mime parts signed?
A: Pop-ups
Q: Dispute resolution?
A: We’ve got a few things, but there are a few bits where you’re out of luck
Some key-revocation, but a little sketchy
Q: Log source IPs?
A: No, we’re using TTP
Q: Deniability is an issue?
A: No.
Q: Signed key helps me get the key to
Usability – can I accomplish the specified goal?
That goal is NOT just security?
Question: is reconciling security and usability impossible, or just difficult?
Security is a global issue
ATM fraud is growing
Fraud is becoming more international
Ebay has a larger economy than… (Kuwait?)
Default is convenience, not security
We’ve asked the banks to make things easier
People don’t develop mental models from scratch (trust)
They extend existing models
Parties involved
Attackers
Legal users of applications
Programmers
ISP owners
Company directors, etc
→ all are making trade-off decisions
Tradeoffs
Legal users have a steeper cost/benefit range
Users have already been told that money is safe, so security is NOT a benefit!
Security is actually an added cost!
Perceived lack of trust imposes a social cost
Carrying things, remembering numbers
Remember the appropriate protocols
Minimize effort, maximize sociability
Fraudsters target those who are vulnerable to social pressure
Trust the banks, people around them
Question: are people unaware of the risks, or is the cost of risk lower than the cost of compliance?
Types of ATM fraud
Card trapping – use of a wire to trap the card in the machine, social engineering to get the pin
Card skimming – using an added-on device to get the magstrip info
It’s a spot the difference problem
Solns: jittering card reader, “this is what your reader should look like”,
“Clean your card” service – an open swiping mechanism
Fraudsters are maximizing as well
Time consuming to get PINs
→ phishing
Phishing
An online training course on Business Finance, with a chapter on “how to transfer money”
Phishing factors
Spot the difference
Communication
Incentives
Shutting the doors
Allow the users to set their risks – hours of transactions, etc
Multiple forms of ID/authentication depending on value
PINs and ATMs
In 1998, 90% of ATMs did not have unique keys
Use a black-white sequence
Most users have many PINs
16% use the same PIN for all, 31% use the same PIN for some
Misconceptions
Miller’s magic numbers – chunking information
Images are recognized because they use multiple parts of the brain
Recalled in order, favoring beginning and end, depends on length
Distractors can have a larger effect over time
Biometrics – people’s intuitive behavior with biometrics does not yield good results
False reject rate is 20-30%
But – for biometrics, you have immediate feedback of whether you did it properly
Security is not sexy for HCI
Need better user studies
People are the weakest link
Users are not your enemy, but people are their own worst enemy
Increasing public awareness can reduce use and confidence
Q: What about motion detectors for ATMs?
A: We prefer tools that detect initial state and any changes
Q: How do banks make their final decision?
A: Not a clear metric
Story: “if anything is different about this ATM” sign on the same day as a new security mech installed
A: People don’t want an ATM that’s too brightly lit, nor too dark
Info should be private, but the location should be public, for security from the public
A User-Friendly Approach to Human Authentication of Messages - Jeff King and Andre dos Santos
Can Alice trust the smart card to sign the message?
She doesn’t have direct interaction with the smartcard
Problem: How can a human interact with a (remote = non-interactive) Trusted Computing Base using an untrusted computing system?
-Use a trusted computing system
BUT: not available, security perimeter
-Directly interact with the TCB
BUT: extra hardware (cost), complexity makes tamper resistance harder
Hard AI problems – things the humans can do well, but computers can’t
CAPTCHA – Completely Automated Public Turing test to tell Computers and Humans Apart
KHAP – keyed hard AI problems
Need to find some way for Alice to extract a unique key from the message from the TCB
AND the intervening computer should not be able to identify the unique key, so it can’t tamper with the message
3D keyed transformation
Use a ray-traced 3D image as a 2D image
Upon receiving the image, user can verify the image, read the text message
Attacker has to re-draw the scene to insert a new message
Attacks
Guess the scene
Redraw the 3D message
Looking for something that is easy for humans, hard for computers
It is a “pluggable” problem
Use speech KHAP, or handwriting KHAP
Human to TCB confirmation
Only one-bit is needed
BUT – adversary can interfere
Confirmation secrets, or use the KHAP
General approach
Security depends on AI problem parameters
Easy to use
Future work
Specific KHAP problems
Usability, security
Q: How much could you send via this message channel?
A: Max message length is 20 characters or so
Why biometric entity authentication
Entity authentication is significant source of fraud
Passwords are not enough
Need to include the “something you are” aspect
Adversary should not be able to recover biometric file from biometric
…
Suppose 99 voters are tied between Blue and Red → flip a coin
Two voters prefer green, but like blue better than red
They have to manipulate to get their second choice
Plurality is bad for small parties
Change aggregation rules
Express all your preferences, remove the least favorable choice
Many other voting schemes
Scheme
N voters, m candidates
Each voter i has a preference p_i
We can’t get around the impossibility theorem (Gibbard-Satterthwaite), so manipulation isn’t preventable, but we can make it hard
Some aggregations are NP-hard to manipulate
Rules may not actually reflect welfare goals
Want security against coalitions, not just individuals
Preround does pair-wise comparison to select the candidates on the ballot
BUT – pairing can be a source of manipulation
Use the ballots themselves as a source of randomness for order selection
Preround does not help for pre-round pairing
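The preround mechanism can be sketched as follows. This is a minimal illustration, assuming a simple seeded random pairing (in the scheme discussed, the randomness would be derived from the ballots themselves); the function and variable names are my own:

```python
import random
from collections import Counter

def preround_plurality(profiles, candidates, seed):
    """Pair candidates at random, keep only the pairwise winner of each
    pair, then run plurality among the survivors.
    profiles: list of ballots, each a list of candidates, best first."""
    rng = random.Random(seed)
    order = list(candidates)
    rng.shuffle(order)
    survivors = []
    for i in range(0, len(order) - 1, 2):
        a, b = order[i], order[i + 1]
        # a survives iff a strict majority of voters rank a above b
        a_wins = sum(1 for p in profiles if p.index(a) < p.index(b))
        survivors.append(a if 2 * a_wins > len(profiles) else b)
    if len(order) % 2:                # an unpaired candidate advances
        survivors.append(order[-1])
    # plurality: each voter votes for their favorite surviving candidate
    tally = Counter(min(survivors, key=p.index) for p in profiles)
    return tally.most_common(1)[0][0]
```

With a unanimous electorate the top choice beats any pairwise opponent, so it survives every pairing and wins the plurality round regardless of the seed.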
Open problems
Average-case hardness is hard to achieve using a pre-round
What is the maximum fraction of manipulators that we can allow
Q: Assumption of identical, equal weights of preference strength
Sealed bid, 1st or second-price auction
Efficient and private
Multi-unit auctions
M identical units, each bidder bids for each item
Assumption: marginal decreasing valuations → tractable
Else, it is NP-hard
Pricing in a multi-unit auction
Discriminatory: Pay what they bid
Uniform: All bidders pay same unit price as highest losing bid
Generalized Vickrey – winner pays sum of losing bids → efficient for eliciting preferences
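A sketch of the three pricing rules for M identical units under decreasing marginal bids — plain arithmetic only, not the paper's privacy-preserving protocol; the function and variable names are my own:

```python
def multiunit_prices(bids, m):
    """bids: dict mapping bidder -> list of decreasing marginal bids.
    Returns per-bidder payments under discriminatory, uniform, and
    generalized Vickrey (VCG) pricing; the top m marginal bids win."""
    all_bids = sorted(((b, n) for n, ms in bids.items() for b in ms),
                      reverse=True)
    won = {n: 0 for n in bids}                  # units won per bidder
    for b, n in all_bids[:m]:
        won[n] += 1
    # Discriminatory: each winner pays their own winning marginal bids
    disc = {n: sum(sorted(bids[n], reverse=True)[:k])
            for n, k in won.items()}
    # Uniform: every unit sells at the highest losing bid
    price = all_bids[m][0] if len(all_bids) > m else 0
    unif = {n: k * price for n, k in won.items()}
    # Generalized Vickrey: a winner of k units pays the k highest bids
    # by OTHERS that its participation displaced
    vcg = {}
    for n, k in won.items():
        others = sorted((b for o, ms in bids.items() if o != n for b in ms),
                        reverse=True)
        vcg[n] = sum(others[m - k:m])
    return disc, unif, vcg
```

For example, with bids A = [10, 8], B = [9, 4], C = [5] and m = 3, A wins 2 units and B wins 1; discriminatory charges A 18, uniform charges A 10 (unit price 5), and generalized Vickrey charges A 9 (the displaced bids 5 and 4).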
Privacy and correctness
Confidentiality
Significance for current auction and future negotiation
Strategic
Bidders could doubt the correctness
…
System can compute prices for all three styles of pricing
Protocol
Distributed generation of keys
Publish encryption of bids
Jointly compute outcome vectors
Distributed decryption
Preferential voting (instant runoff)
T candidates, voters rank candidates
If there is no majority, the candidate with the fewest first-choice votes is eliminated
Votes for the eliminated candidate shift to each ballot’s next-ranked candidate
Problem: preferences are publicly revealed anonymously, but still identifiable
Can still get paid to vote: a full preference ordering is combinatorially distinctive enough to serve as an identifying signature
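For reference, the instant-runoff tally itself, computed here in the clear — which is exactly the privacy problem being addressed. A minimal sketch with names of my own choosing:

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot ranks candidates best-first.  Repeatedly eliminate
    the candidate with the fewest first-choice votes until someone
    holds a strict majority of the remaining first choices."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        winner, votes = tally.most_common(1)[0]
        if 2 * votes > total or len(tally) == 1:
            return winner
        loser = min(tally, key=tally.get)
        # shift the eliminated candidate off every ballot
        ballots = [[c for c in b if c != loser] for b in ballots]
```

With 4 ballots A>B>C, 3 ballots B>A>C, and 2 ballots C>B>A, no one has a majority in round one, C is eliminated, its votes shift to B, and B wins 5 of 9.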
Frameworks:
Mixnets
Homomorphic encryption
Soln1
Just use yes/no ballots, homomorphically encrypted
BUT – requires too much interaction
Soln2
Put more state on the counters – encrypt the rank of each candidate
Now the tallying process can just add E(-1)
BUT – hard to tally these efficiently
Private counters
N voters, t candidates
Use a mixnet to keep track of whether a preference is in a range
In every round, remove the candidate’s counter from each ballot, and apply the other counters
Security of voting follows from the counter construction
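The additive-homomorphic tallying idea — multiply ciphertexts to add plaintexts, so a counter can be decremented by multiplying in E(-1) — can be illustrated with a toy Paillier cryptosystem. This is a generic stand-in, not the paper's construction, and the fixed tiny primes are for illustration only, not security:

```python
import random
from math import gcd

def keygen(p=10007, q=10009):   # toy primes: NOT secure
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)        # works because we fix g = n + 1
    return n, (lam, mu, n)      # public key n, secret key (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m % n, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n

pub, priv = keygen()
n2 = pub * pub
tally = 1
for vote in (1, 0, 1):                  # three encrypted yes/no ballots
    tally = tally * encrypt(pub, vote) % n2   # ciphertext product = plaintext sum
tally = tally * encrypt(pub, -1) % n2   # decrement the counter with E(-1)
```

After the loop the tally decrypts to 2; multiplying in E(-1) brings it to 1, all without the tallier ever seeing an individual vote.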
Stuart Stubblebine – Secure Distributed Human Computation
Large scale distributed computation is good, but it can’t solve everything
Lucky: NASA research displays pictures of Martian craters, users can pick the category
Allan: Think about Surowiecki’s “Wisdom of Crowds”
Allan Friedman on “Rhetoric and the Public Policy of Security”
Avi Rubin on RFID
Moti on Scalable Public Key and Revocation
…
Three-party password-based authentication system
Use a single trusted server to establish a session key between three parties
Small password space (4-digit PIN)
Vulnerable to dictionary attacks
Limit the attacks to online attacks only
Biometric authentication is simple
BUT – changing is difficult
Standard tools such as hashing don’t really work
Goal – a lightweight hash function
Smart-card based
No single point of failure: server or smartcard
Server compromise should not lead to the ability to impersonate user
i.e. a biometric PIN for banking system
Adversary model
Defined by resources
Cracked or uncracked smartcard
Fingerprint or eavesdropping
Security requirement
Confidentiality of fingerprint
Integrity – impersonation
Availability – prevention of use
Three oracles that are all compared with each other
Bad approaches
Sending fingerprint – not private
Sending hash of fingerprint – not correct
Sending fingerprint XORed with a one-time pad – correct and somewhat private, but if the pad is reused it leaks which bits change between readings
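The leak in that third approach is easy to demonstrate: if the pad is ever reused, XORing two ciphertexts cancels the pad and exposes exactly which bits differ between two noisy readings of the same fingerprint (the two-byte "readings" below are made up for illustration):

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# two noisy sensor readings of the same fingerprint, one bit apart
reading1 = bytes([0b10110010, 0b01101100])
reading2 = bytes([0b10110011, 0b01101100])

pad = secrets.token_bytes(2)            # the flaw: the same pad is reused
c1, c2 = xor(reading1, pad), xor(reading2, pad)

# an eavesdropper XORs the two ciphertexts: the pad cancels out,
# revealing exactly which bits changed between readings
diff = xor(c1, c2)
assert diff == xor(reading1, reading2)
```

This is why a one-time pad must genuinely be used one time; with biometrics, where every reading is a slightly different version of the same secret, reuse leaks the sensor-noise pattern.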
These notes were recorded on the fly by Allan Friedman, and any omissions or inaccuracies are purely his fault. To learn more about the papers here, please see the proceedings (to be published by Springer-Verlag), the IFCA website, or contact the authors directly.
I apologize for the horrendous formatting; I took notes on a Windows box and was lazy about dumping into HTML, so plenty of nasty artifacts remain.