An Experimental Study of the Influence of Dynamic Compiler Optimizations on Scala Performance

2018-12-01

Java Virtual Machines are optimized for the bytecode patterns produced by the Java compiler (javac). Scala encourages a different programming style, so the bytecode the Scala compiler emits is served less well by the JVM's optimizations.

Graal uses a profiling interpreter to collect information about the application code before compiling any methods, including branch probabilities, type profiles, exception probabilities, and method invocation counters. The compiler uses this information to optimize the code it produces.
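As a minimal sketch (mine, not the paper's) of why this matters for Scala: idiomatic Scala leans on closures and higher-order functions, which produce polymorphic call sites that javac-generated code rarely contains; type profiles let the compiler discover that such a call site is effectively monomorphic at runtime and inline through it.

```scala
// Idiomatic Scala: each lambda becomes its own function object, so the
// call inside `map` is polymorphic across the whole program. A type
// profile can show that only this lambda ever reaches this call site,
// letting the compiler inline it behind a cheap type check.
def sumOfSquares(xs: List[Int]): Int =
  xs.map(x => x * x).sum

// The Java-style equivalent is already monomorphic and easy for any
// JIT to optimize, profile or no profile.
def sumOfSquaresLoop(xs: Array[Int]): Int = {
  var acc = 0
  var i = 0
  while (i < xs.length) {
    acc += xs(i) * xs(i)
    i += 1
  }
  acc
}
```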

If an optimization's assumption about application behaviour fails, the machine code is invalidated and the system continues execution in the interpreter; this transition from optimized code back to the interpreter is called deoptimization. The interpreter then updates the profile so that the new application behaviour is incorporated into the next compilation.
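A hedged sketch of how speculation and deoptimization interact (illustrative Scala, not taken from the paper):

```scala
trait Shape { def area: Double }
final case class Circle(r: Double) extends Shape { def area: Double = math.Pi * r * r }
final case class Square(s: Double) extends Shape { def area: Double = s * s }

// If the interpreter's type profile shows that every Shape seen here
// was a Circle, the compiled code can speculate on that: inline
// Circle.area and guard the assumption with a cheap type check.
def totalArea(shapes: Seq[Shape]): Double = shapes.map(_.area).sum

// The first Square that fails the guard triggers deoptimization:
// execution resumes in the interpreter, the profile now also records
// Square, and the next compilation handles both types.
```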

The paper runs the Scala DaCapo and Java DaCapo benchmark suites on the GraalVM compiler and VM while disabling various compiler optimizations in order to measure their effect on performance. Examples are type-checked inlining, intrinsifying methods, and loop refactoring.

Graal does significantly better on Scala DaCapo than on Java DaCapo for two reasons: Scala code contains more opportunities for the kinds of optimization at which Graal excels, and Java DaCapo has already received a lot of performance tuning on HotSpot's client and server compilers.

Moving Fast with Software Verification

2018-12-07

Modern technology companies like Facebook practice a fast, high-iteration style of software development, which generally conflicts with how formal verification has traditionally been applied in industry.

The INFER tool is deployed in production at Facebook, running on every code diff, and applies recent advances in automatic verification. Its underlying formalism is separation logic, and it employs a compositional variant of the RHS (Reps-Horwitz-Sagiv) inter-procedural analysis algorithm.

Given a piece of code \(C\), INFER synthesises pre/post specifications of the form \(\{P\} C \{Q\}\) by inferring \(P\) and \(Q\). The specifications express memory safety rather than full functional correctness.
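As an illustration (a textbook separation-logic triple, not an example taken from the paper), a specification inferred for a single heap write might be

\[ \{\, x \mapsto - \,\}\ \texttt{*x = 7}\ \{\, x \mapsto 7 \,\} \]

where the precondition demands only the one cell that \(x\) points to. The separating conjunction keeps disjoint heap regions independent, which is what makes per-procedure summaries composable.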

Other static verification tools generally use whole-program analysis, which is too slow for fast-iteration software development. INFER achieves incremental analysis by running a full-program analysis nightly; when analysing a proposed code diff, it reuses the cached results for the parts of the program that did not change (made possible by INFER's compositional nature).
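A hypothetical sketch (all names invented) of why compositionality enables this reuse: a procedure's summary depends only on its own code and its callees' summaries, so summaries for unchanged procedures can be served straight from the nightly cache.

```scala
// Stand-in for an inferred {P} C {Q} specification.
final case class Summary(pre: String, post: String)

// Summarise one procedure touched by a diff: recompute only if it
// changed, otherwise reuse the result cached by the nightly full run.
def summarise(
    proc: String,
    changedProcs: Set[String],
    nightlyCache: Map[String, Summary],
    analyseFresh: String => Summary
): Summary =
  if (changedProcs.contains(proc)) analyseFresh(proc)
  else nightlyCache(proc)
```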

There is also a social challenge in building trust in the tool: start small, with Android resource leaks and null dereferences.
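As a concrete example of the kind of defect these checks target (my sketch, in Scala syntax; INFER's Facebook deployment analyses Java and Objective-C), a stream that leaks on the exceptional path:

```scala
import java.io.FileInputStream

// Bug: if read() throws, close() is never reached and the stream leaks.
def firstByte(path: String): Int = {
  val in = new FileInputStream(path)
  val b = in.read() // may throw IOException
  in.close()
  b
}

// Fix: release the resource on every path.
def firstByteSafe(path: String): Int = {
  val in = new FileInputStream(path)
  try in.read()
  finally in.close()
}
```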

The paper suggests that industrial program analysis is underdeveloped partly because academic research focuses heavily on whole-program analysis or specify-first styles, which limits the industrial use cases. Automatic formal verification will have great impact if we invest in compositional analyses.

Lessons from Building Static Analysis Tools at Google

2018-12-09

Google attempted to integrate the FindBugs project into its developer workflow but ran into roadblocks. The lessons learned were used to shape the static analysis workflow currently in use at Google.

Reasons engineers do not use static analysis tools, or ignore their warnings:

  • not integrated into their workflow
  • not actionable
  • not trustworthy in terms of false positives
  • no manifestation in production
  • too expensive to fix
  • warnings not understood

Make it a compiler workflow

The Clang team would implement a new compiler check along with a suggested fix. They then ran the check over the entire Google codebase using ClangMR (MapReduce) and programmatically fixed existing problem instances. Once the codebase was cleansed, the new check was enabled as a compiler error, which is difficult to disregard.

Compiler checks surface early in the development process and integrate into developer workflows. Such checks must meet a high bar: they should be easy to understand, actionable and easy to fix, produce no false positives, and report correctness issues rather than style.
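As a concrete illustration of a check that clears this bar (my example, not one from the paper): in Scala, `==` on arrays falls back to reference equality, so the comparison below is almost certainly a bug, the fix is mechanical, and flagging it yields essentially no false positives.

```scala
val a = Array(1, 2, 3)
val b = Array(1, 2, 3)

// Bug: arrays use reference equality, so this is false even though the
// contents match; a correctness issue, not a style nit.
val broken = a == b

// The mechanical fix a checker could suggest (or apply codebase-wide):
val fixed = a.sameElements(b)
```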

Code review checks

Criteria:

  • understandable
  • actionable and easy to fix
  • a false-positive rate below 10% (enforced through user feedback marking results useful/not useful)
  • potential for significant code quality improvement

Lessons

  • finding bugs is easy, so simple bug detection tooling works
  • most developers will not go out of their way to use static analysis tools
  • developer happiness is key
  • don’t just find bugs, fix them
  • crowdsource analysis development

Formal Reasoning About the Security of Amazon Web Services

2018-12-09

AWS uses formal verification to increase the security assurance of

  • its cloud infrastructure
  • customers using its cloud services

Formal verification is an investment AWS is making to support its continued growth and adoption.

Formal reasoning in securing the cloud

The security review process used by the AWS security team increasingly relies on deductive theorem proving and/or symbolic model checking.

This can be applied to cryptography, hypervisors, BIOS, garbage collection, and network design. The verification tools must be used continuously to ensure that security assurances last for the entirety of a system’s lifecycle.

Formal reasoning in securing cloud customers

AWS employees partner with customers to ensure their functional and security needs are met.

AWS services like Config, Inspector, and S3 provide automated formal reasoning facilities to customers. For example, the S3 console alerts the user when a bucket policy is possibly misconfigured; the alert is driven by SMT (satisfiability modulo theories) based reasoning. A constraint solver likewise answers reachability questions about virtual networks built using AWS EC2.
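As an illustration (my sketch, not AWS's actual encoding), the bucket-policy alert can be phrased as a satisfiability query: warn if

\[ \exists r.\ \mathit{Allows}(\mathit{policy}, r) \,\land\, \mathit{Principal}(r) = \mathit{anonymous} \]

is satisfiable, i.e. if some request \(r\) from an unauthenticated principal would be allowed. An unsatisfiable answer from the solver is a proof that no such request exists.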

Challenges

Challenges of using formal verification include (more are mentioned in the paper):

  • finding and fixing: problems that are found must have associated solutions
  • auditable proof artifacts
  • continuous formal verification