Manifest Security Project (nsf-0714649)






  • Manifest Security Project (NSF-0714649)

    • Penn: Benjamin Pierce, Stephanie Weirich
    • CMU: Karl Crary, Bob Harper, Frank Pfenning
  • CAREER: Language-based Distributed System Security (NSF-0311204)



Networked hosts

  • Many applications running

  • Data resides at the hosts but can be shared over the network



Some data is confidential

  • Heterogeneous trust:

    • Not all hosts are trusted to the same degree
    • Different principals might have different opinions about trust


Policies are complex to specify and difficult to enforce

    • Many entities: hosts, programs, users, etc.
    • Software is large and complex
    • Heterogeneous trust implies decentralization of policies
  • Existing mechanisms are necessary…

    • Cryptography
    • Network protocols
    • Isolation at the OS / firewall level
  • …but not sufficient

    • Must decrypt data to process it
    • Hard to regulate information flows through software
    • OS / firewall access control and auditing are at the wrong level of abstraction


Use static analysis (e.g. type systems) and dynamic checks to enforce security properties at the language level.

  • My own research [Zdancewic et al.]

    • Jif, secure-program partitioning, robust declassification, …
  • Authorization Logics & Proof Carrying Authorization:

    • ABLP 1993 [Abadi, Burrows, Lampson, Plotkin]
    • DCC [Abadi], linearity & time [Garg & Pfenning]
    • Trust Management [Blaze et al.]
  • Information flow:

    • Jif [Myers et al.], FlowCaml [Simonet et al.], …
  • Much work in the last decade:

  • ESC/Java, Spec# [Leino et al.]; Type Qualifiers [Foster et al.]; PoET/PSLang [Erlingsson & Schneider]; TAL, Cyclone [Morrisett et al.]; PCC, CCured [Necula et al.]; xg++, metal [Engler et al.]; Fable [Hicks…]



Develop a security-oriented programming language that supports:

    • Proof-carrying Authorization [Appel & Felten] [Bauer et al.]
    • Strong information-flow properties (as in Jif [Myers et al.], FlowCaml [Pottier & Simonet])
  • Why?

    • Declarative policies (for access control & information flow)
    • Auditing & logging: proofs of authorization are informative
    • Good theoretical foundations
  • In this talk: A high-level tour of AURA's design

    • Focus on the authorization and audit components


AURA's programming model

  • Authorization logic

    • Examples
  • Programming in AURA

    • Dependent types
  • Status, future directions, conclusions



AURA is a type-safe functional programming language

  • As in Java, C#, etc., AURA provides an interface to OS resources

    • disk, network, memory, …
  • AURA is intended to be used for writing security-critical components



AURA security policies are expressed in an authorization logic

  • Applications can define their own policies

  • Language provides features for creating/manipulating proofs



Proofs are first class and they can depend on data

  • Proof objects are capabilities needed to access resources protected by the runtime: AURA's type system ensures compliance

  • The runtime logs the proofs for later audit



For distributed systems, AURA also manages private keys

  • Keys can create policy assertions sharable over the network

  • Connected to the policy by AURA's notion of principal





Connecting the contents of log entries to policy helps determine what to log.

  • Proofs contain structure that can help administrators find flaws or misconfigurations in the policy.

  • Reduced TCB: Typed interface forces code to provide auditable evidence.



AURA's programming model

  • Authorization logic

    • Examples
  • Programming in AURA

    • Dependent types
  • Status, future directions, conclusions



Policy propositions

  • φ ::= true

  •   | c

  •   | A says φ

  •   | φ ∧ φ

  •   | φ ∨ φ

  •   | φ → φ

  •   | ∀x. φ



P1: FS says (Owns A f1)

  • P2: FS says (Owns B f2)

  • OwnerControlsRead: FS says ∀o,r,f. (Owns o f) → (o says (MayRead r f)) → (MayRead r f)

  • Might need to prove: FS says (MayRead A f1)

  • What are "Owns" and "f1"?



Authorization policies require application-specific constants:

    • e.g. "MayRead B f" or "Owns A f"
    • There is no "proof evidence" associated with these constants
    • Otherwise, it would be easy to forge authorization proofs
  • But, principal A should be able to create a proof of A says (MayRead B f)

    • No justification required -- this is a matter of policy, not fact!
  • Decentralized implementation:

    • One proof that "A says T" is A's digital signature on a string "T"
    • written sign(A, "T")
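The decentralized reading of "A says T" as a signature can be sketched concretely. A minimal Python model, assuming an HMAC keyed hash as a stand-in for a real public-key signature scheme (so here verification also needs A's private key); Principal, sign, and verify are illustrative names, not AURA's API:

```python
import hmac, hashlib

class Principal:
    """Toy principal: authority is modeled as possession of a private key."""
    def __init__(self, name, private_key):
        self.name = name
        self._key = private_key                       # kept private

    def sign(self, statement):
        """Produce evidence for 'self says statement', i.e. sign(A, "T")."""
        tag = hmac.new(self._key, statement.encode(), hashlib.sha256).hexdigest()
        return {"signer": self.name, "statement": statement, "tag": tag}

    def verify(self, signed):
        """Recompute the tag; a forged or altered statement fails the check."""
        expected = hmac.new(self._key, signed["statement"].encode(),
                            hashlib.sha256).hexdigest()
        return signed["signer"] == self.name and hmac.compare_digest(
            signed["tag"], expected)

A = Principal("A", b"secret-key-of-A")
proof = A.sign("MayRead B f")       # one proof of: A says (MayRead B f)
assert A.verify(proof)
```

Because the constants carry no proof evidence, only possession of A's key can mint the signature, which is what makes the proofs unforgeable.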


P1: FS says (Owns A f1)

  • OwnerControlsRead: FS says ∀o,r,f. (Owns o f) → (o says (MayRead r f)) → (MayRead r f)

  • Direct authorization via FS's signature: sign(FS, "MayRead A f1") : FS says (MayRead A f1)



P1: FS says (Owns A f1)

  • OwnerControlsRead: FS says ∀o,r,f. (Owns o f) → (o says (MayRead r f)) → (MayRead r f)

  • Complex proof constructed using "bind" and "return":

      bind p = OwnerControlsRead in
      bind q = P1 in
      return FS (p A A f1 q sign(A, "MayRead A f1"))

  • The resulting proof has type: FS says (MayRead A f1)



How to create the value sign(A, "φ")?

  • Components of the software have authority

    • Authority modeled as possession of a private key
    • With A's authority: say("φ") evaluates to sign(A, "φ")
  • Which φ's should a program be able to say?

    • From a statically predetermined set (static auditing)
    • From a set determined at load time
  • In any case: log which assertions are made



T  P says T "Principals assert all true statements"

  • T  P says T "Principals assert all true statements"

  • (P says T)  (P says (T  U))  (P says U) "Principals' assertions are closed under deduction"

  • Define "P speaks-for Q" = . (P says )  (Q says )

  • (Q says (P speaks-for Q))  (P speaks-for Q) "Q can delegate its authority to P" (The "hand off" axiom)
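The two monadic axioms above can be modeled operationally. A hedged Python sketch, with Says, return_, and bind as illustrative names; the principal check in bind enforces that deduction stays within a single principal's assertions:

```python
class Says:
    """Toy model of 'P says T': a payload tagged with the asserting principal."""
    def __init__(self, principal, payload):
        self.principal = principal
        self.payload = payload

def return_(principal, value):
    """Axiom 1: T -> (P says T) -- inject a statement into P's worldview."""
    return Says(principal, value)

def bind(pf, k):
    """Axiom 2: (P says T) -> (P says (T -> U)) -> (P says U).
    k consumes the payload and must produce an assertion by the same P."""
    result = k(pf.payload)
    assert result.principal == pf.principal, "cannot mix principals"
    return result

# Derive FS says (MayRead A f1) from FS says (Owns A f1), as in the slides.
p1 = return_("FS", ("Owns", "A", "f1"))
concl = bind(p1, lambda owns: return_("FS", ("MayRead",) + owns[1:]))
assert concl.principal == "FS" and concl.payload == ("MayRead", "A", "f1")
```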



It is not possible to prove False: False = ∀φ. φ "The logic is consistent"

  • Without sign, it is not possible to prove: P says False "Principals are consistent"

  • It is not possible to prove: ∀φ. (A says φ) → φ "Just because A says it doesn't mean it's true"

  • If Q ≠ P, then there is no T such that: (Q says T) → P says False "Nothing Q can say can cause P to be inconsistent"



AURA's programming model

  • Authorization logic

    • Examples
  • Programming in AURA

    • Dependent types
  • Status, future directions, conclusions



  • Propositions: specify policy

    • A says φ
    • (φ → φ), ∀x. T
    • (Owns A fh1)
  • Evidence: proofs/credentials

    • sign(A, "φ")
    • bind, return P



Policy propositions can mention program data

    • E.g. "f1" is a file handle that can appear in a policy
    • AURA restricts dependency to first order data types
    • Disallows computation at the type level
  • Programming with dependent types:

    • {x:T; U(x)} dependent pairs; (x:T) → U(x) dependent functions
  • Invariant: sign only types

    • Computation can't depend on signatures
    • But, can use predicates: {x:int; A says Good(x)}
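The dependent-pair idea can be illustrated with a dynamic check standing in for AURA's static one. A sketch assuming a hypothetical GoodInt class for {x:int; A says Good(x)}, with propositions encoded as plain tuples for illustration:

```python
class GoodInt:
    """Models the dependent pair {x:int; A says Good(x)}: the proposition in
    the second component mentions the value in the first component."""
    def __init__(self, x, evidence):
        # Dynamic stand-in for AURA's static check that the evidence
        # really is about *this* x.
        if evidence != ("A says", ("Good", x)):
            raise ValueError("evidence is not 'A says Good(x)' for this x")
        self.x = x
        self.evidence = evidence

ok = GoodInt(7, ("A says", ("Good", 7)))   # well-formed pair
assert ok.x == 7
```

The point of the restriction on the slide is visible here: x is first-order data the proposition can mention, but no computation happens at the type level.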


Type of the "native" read operation:

  • raw_read : FileHandle → String

  • AURA's runtime exposes it this way: read : (f:FileHandle) → RT says (OkToRead self f) → {ans:String; RT says (DidRead f ans)}

  • RT is a principal that represents the AURA runtime

  • OkToRead and DidRead are "generic" policies

    • The application implements its own policy about when OkToRead holds
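The wrapped interface can be sketched with dynamic checks in place of AURA's type system. The names mirror the slide (raw_read, read, OkToRead, DidRead); encoding proofs as tuples and "self" as a literal string are assumptions made purely for illustration:

```python
audit_log = []                               # the runtime logs proofs for audit

def raw_read(f):
    """Stand-in for the native read: FileHandle -> String."""
    return "contents of " + f

def read(f, ok_proof):
    """Sketch of AURA's exposed read: requires evidence 'RT says (OkToRead
    self f)' and pairs the answer with 'RT says (DidRead f ans)'."""
    if ok_proof != ("RT says", ("OkToRead", "self", f)):
        raise PermissionError("no valid OkToRead proof for " + f)
    audit_log.append(ok_proof)               # log close to the privileged use
    ans = raw_read(f)
    return ans, ("RT says", ("DidRead", f, ans))

ans, did = read("f1", ("RT says", ("OkToRead", "self", "f1")))
```

A caller without the right proof value cannot reach raw_read, which is the capability discipline the type system enforces statically in AURA.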


(* assertions to do with FS access-control policy *)

  assert Owns    : Prin -> FileHandle -> Prop;
  assert MayRead : Prin -> FileHandle -> Prop;

  (* RT's delegation to FS *)
  let del = say((f:FileHandle) -> (p:Prin) ->
                FS says (MayRead p f) -> OkToRead FS f)

  (* FS code to handle a request *)
  let handle (r:Request) =
    match r with {
    | readRequest q o f (x : o says MayRead q f) ->
        match getOwner f with {
        | Just {o':Owner; ownFS : FS says Owns o' f} ->
            if o = o' then
              let FSproof = bind a = OwnerControlsRead in
                            bind b = ownFS in
                            return FS (a o q f b x) in
              let cap : OkToRead FS f = del f q FSproof in
              Just (read f cap)
            else …
        | Nothing -> …
        | …



AURA's programming model

  • Authorization logic

    • Examples
  • Programming in AURA

    • Dependent types
  • Status, future directions, conclusions



AURA's core calculus:

    • Rich type system that supports dependent authorization policies, recursive types, etc.; suitable as a compiler intermediate language
    • Type system is specified using the Coq proof assistant
    • Correctness properties proved in Coq: Type soundness proof is (nearly) complete (~7000 loc)
  • Have implemented an interpreter in F#

  • Planning to compile AURA to Microsoft .NET platform

    • Proof representation / compatibility with C# and other .NET languages


This story seems just fine for integrity, but what about confidentiality?

    • We have many ideas about connecting to information-flow analysis
    • Is there an "encryption" analog of the "signatures" interpretation?
  • Other future directions:

    • Revocation/expiration of signed objects?
    • (Dependent) Type inference? Proof inference?
    • Connection to program verification?
    • Correlate distributed logs?






What does the program do with the proofs?

  • More Logging!

    • They record justifications of why certain operations were permitted.
  • When do you do the logging?

    • Answer: As close to the use of the privileges as possible.
    • Easy for built-in security-relevant operations like file I/O.
    • Also provide a "log" operation for programmers to use explicitly.
  • Question: what theorem do you prove?

    • Correspondence between security-relevant operations and log entries.
    • Log entries should explain the observed behavior of the program.
  • Speculation: A theory of auditing?
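The correspondence theorem suggested above can be phrased as a checkable property. A toy sketch, assuming a list of observed operations and a log of (operation, justification) entries; privileged_op and audit are hypothetical names:

```python
operations, log = [], []

def privileged_op(name, proof):
    """Perform a security-relevant operation, logging its justification as
    close to the use of the privilege as possible."""
    log.append((name, proof))        # the log entry explains the behavior
    operations.append(name)          # the observed behavior itself

def audit(operations, log):
    """Correspondence: each observed operation is explained by exactly one
    log entry, in order."""
    return [entry[0] for entry in log] == operations

privileged_op("read f1", "proof that RT says (OkToRead self f1)")
privileged_op("read f2", "proof that RT says (OkToRead self f2)")
assert audit(operations, log)
```

An operation performed without a matching log entry would make audit fail, which is the shape of theorem the slide asks for.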



Enforcing authorization policies in distributed systems is difficult

    • Policies can become complex
    • Software that enforces the policies can be complex
  • Authorization Logics:

    • Abadi, Burrows, Lampson, Plotkin "ABLP" [TOPLAS 1993]
      • somewhat ad hoc w.r.t. delegation and negation
    • Garg & Pfenning [CSFW 2006, ESORICS 2006]
    • Becker, Gordon, Fournet [CSFW 2007]
  • Trust Management / KeyNote

    • Blaze et al. [Oakland 1996…]


A Core Calculus of Dependency [Abadi, Banerjee, Heintze, Riecke: POPL 1999]

    • Type system with a lattice of "labels"; labeled types T_L
    • Key property: noninterference
    • Showed how to encode many dependency analyses: information flow, binding time analysis, slicing, etc.
  • Access control in a Core Calculus of Dependency [Abadi: ICFP 2006]

    • Essentially the same type system is an authorization logic
    • Instead of T_L, read the type as "L says T"
    • Curry-Howard isomorphism "programs are proofs"
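DCC's labeled types can be sketched as a small dynamic discipline. A hedged Python model, assuming a two-point lattice; bind_label enforces the key dependency rule that a value protected at label L is eliminated only by observers at or above L (read Labeled("secret", v) either as T_secret or as "secret says T"):

```python
LATTICE = {"public": 0, "secret": 1}      # a two-point label lattice

class Labeled:
    """Toy model of DCC's T_L: a value protected at a label."""
    def __init__(self, label, value):
        self.label, self.value = label, value

def bind_label(lv, observer, k):
    """Eliminate T_L only at observers at or above L in the lattice; a
    downward flow raises instead of leaking the protected value."""
    if LATTICE[observer] < LATTICE[lv.label]:
        raise PermissionError("illegal flow: %s -> %s" % (lv.label, observer))
    return k(lv.value)

pin = Labeled("secret", 1234)
assert bind_label(pin, "secret", lambda v: v + 1) == 1235
```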


assert Owns : Prin -> FileHandle -> Prop

  data OwnerInfo : FileHandle -> Type {
  | oinfo : (f:FileHandle) ->
            (p:Prin) ->
            (pf (FS says (Owns p f))) ->
            OwnerInfo f
  }

  getOwner : (f:FileHandle) -> Maybe (OwnerInfo f)



assert OkToRead : FileHandle -> Prop

  assert HasContents : FileHandle -> String -> Prop

  data FileData : FileHandle -> Type {
  | fd : (f:FileHandle) ->
         (d:String) ->
         (pf (FS says (HasContents f d))) ->
         FileData f
  }

  read : (f:FileHandle) ->
         (pf (FS says (OkToRead f))) ->
         FileData f



Cryptographic protocols provide authentication

  • Encryption protects data on the network

  • OS / Firewalls provide coarse-grained isolation and protection



Data must be decrypted during computation

  • Encryption must be used consistently with the policy

  • Regulating information-flow through software is hard

  • Auditing is at the wrong level of abstraction (Firewall, OS)

  • Policy is usually expressed at the application level


