A Memory Exhaustion Attack Against the Steem Blockchain

Posted in #software, 6 years ago
  • This article explains the security risk patched in steemd 0.20.7.
  • Fuzz-testing a message parsing library in the Steem blockchain implementation with American Fuzzy Lop revealed unexpectedly large memory usage.
  • The memory allocation point identified by AFL can be turned into a practical denial-of-service attack on Steem and other Graphene-based blockchains.

Introduction

In a previous article, we introduced fuzz testing with American Fuzzy Lop and showed how it could be applied to the JSON parsing library included in Steem, a “social blockchain.” The Steem backend is written in C++ and uses a separate introspection-driven marshalling and unmarshalling library for messages between peers in the Steem network.

This article will explain the process of using American Fuzzy Lop on the unmarshalling library, highlight a problem which AFL discovered, and show how this problem can be made into a practical denial-of-service attack against Steem.

Fuzz-Testing the fc::raw library

Steem’s “core messages” are defined as structures in libraries/net/include/graphene/net/core_messages.hpp. A macro generates marshalling and unmarshalling code for these C++ structures, using the raw module of the FC (“Fast Compilation”) library. An example message and its macro are shown below:

  struct hello_message
  {
    static const core_message_type_enum type;

    std::string             user_agent;
    uint32_t                core_protocol_version;
    fc::ip::address         inbound_address;
    uint16_t                inbound_port;
    uint16_t                outbound_port;
    node_id_t               node_public_key;
    fc::ecc::compact_signature signed_shared_secret;
    fc::variant_object      user_data;
…
};


FC_REFLECT( graphene::net::hello_message, (user_agent)
                                    (core_protocol_version)
                                    (inbound_address)
                                    (inbound_port)
                                    (outbound_port)
                                    (node_public_key)
                                    (signed_shared_secret)
                                    (user_data) )

To fuzz-test this structure’s parsing, we again need a test harness, a simple program that reads data from standard input (or a file) and populates the message object with it. Because we are only testing a single message type, the test input does not include the message header or size, but sets them up so that they are always correct. The harness uses the same instrumented library set up in the previous article.

The message format lacks a checksum or other verification at the parsing level. If it did, we could ensure that the checksum was always correct, or disable the checksum for test purposes. (An attacker could presumably use correct checksum values, but having AFL try to guess them would be very inefficient.)

const int buf_size = MAX_MESSAGE_SIZE + 1024;
char buf[buf_size];

int main( int argc, char *argv[] ) {
  std::cin.read( buf, buf_size );
  if ( !std::cin.eof() ) {
    std::cout << "String too long.\n";
    return 1;
  }
  size_t size = std::cin.gcount();

  try {
    message m;
    // Message size and type will be filled in so that they match how
    // many bytes we actually filled, and so that the type matches the
    // decode function we're trying to use.
    //
    // m.size does *not* include the header, according to the original code.
    //
    // Data is padded on the wire but shortened to m.size before
    // demarshaling.
    m.size = fc::min( size, (size_t)MAX_MESSAGE_SIZE );
    m.data.resize( m.size );    
    std::copy( buf, buf + m.size, m.data.begin() );

    m.msg_type = graphene::net::hello_message_type;
    graphene::net::hello_message hello = m.as<graphene::net::hello_message>();    
  } catch ( fc::exception &e ) {
    std::cout << e.to_string() << "\n";
    return 1;
  } catch ( std::bad_alloc &e ) {
    std::cout << "Out of memory!";
    abort();
  }
  return 0;
}
AFL benefits from having a valid input as an initial example. This was easily generated by creating a hello_message, filling in its fields, and converting it into a byte stream via the graphene::net::message object.

Crashes

AFL reported finding crashes after a few minutes of work, which proved to be the result of failed memory allocations. AFL deliberately sets the maximum address space for the test harness program to 50 MB (by default) to look for cases that create unusual amounts of dynamic memory. Steem limits the maximum size of the message to 2MB, not including the message header, but the test messages that triggered the memory allocation failures were much smaller.

One site that triggered an out-of-memory condition was this function:

    template<typename Stream> inline void unpack( Stream& s, std::vector<char>& value ) {
      unsigned_int size;
      fc::raw::unpack( s, size );
      FC_ASSERT( size.value < MAX_ARRAY_ALLOC_SIZE );
      value.resize(size.value);
      if( value.size() )
        s.read( value.data(), value.size() );
    }

MAX_ARRAY_ALLOC_SIZE is set to 10MB, permitting an array far larger than would actually fit in the 2MB message limit. The code resizes the vector first and only detects end-of-message later, when the stream throws an exception while filling the array.

Exploiting the Memory Allocation

The FC library provides a union-like type called variant. This member of the hello_message structure is a keyed collection of variants:

fc::variant_object user_data;

One of the types supported by variant is a vector of variants. A test showed that the library supported arbitrary nesting of vectors within a top-level variant_object:

    std::vector<fc::variant> va1( 0xaa );
    std::vector<fc::variant> va2( 0xbb );
    std::vector<fc::variant> va3( 0xcc );
    std::vector<fc::variant> va4( 0xdd );
    std::vector<fc::variant> va5( 0xee );
    fc::to_variant( va5, va4[0] );
    fc::to_variant( va4, va3[0] );
    fc::to_variant( va3, va2[0] );
    fc::to_variant( va2, va1[0] );

    fc::mutable_variant_object mut;
    mut.set( "aaaaaa", va1 );
    hm.user_data = mut;
    message m( hm ); 

On the wire, each nested array is represented by an object type (06 for array) and a length (packed into one to four bytes), followed by the contents of the array.

00000000  0a 75 73 65 72 5f 61 67  65 6e 74 03 00 00 00 00  |.user_agent.....|
00000010  00 00 00 90 1f 7a 00 00  00 00 00 00 00 00 00 00  |.....z..........|
00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000070  00 00 00 00 00 00 00 00  00 01 06 61 61 61 61 61  |...........aaaaa|
00000080  61 06 aa 01 06 bb 01 06  cc 01 06 dd 01 06 ee 01  |a...............|
             ^^ array_type
                ^^ ^^ packed length

00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

Thus, after a prelude containing the fixed fields of the hello_message, we can create a variant_object which contains a maximum-length array A1. The first element of A1 is itself a maximum-length array A2, whose first element is another maximum-length array A3, and so on, as deeply as the maximum message size allows. The fc::raw::unpack functions will recurse down the first element of each array, preallocating the specified array size each time.

The prelude shown here is 129 bytes, and each additional level of nesting costs four bytes to encode a maximum-sized array, permitting us to nest 131063 levels deep. Either the process terminates with a stack overflow, or the total memory allocation reaches 10MB per level, about 1.3 terabytes in total.

Analysis

Normally, the absence of hand-written demarshalling code is a good sign! Repetitive hand-written code frequently contains unnoticed errors and vulnerabilities.

In this case, though, the generality of the introspection-based demarshalling code is itself a vulnerability. The programmers did not mean to permit arbitrarily-deeply-nested structures, but it is difficult to forbid them once some level of recursion has been implemented. The check for a maximum array allocation is a good security measure; it prevents us from simply asking for terabytes of memory all at once. But because arrays can be recursively nested, the check is insufficient to protect against the attack described above.
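One common defense against this class of bug is to thread a shared budget through the recursive unpack, capping both nesting depth and the cumulative bytes reserved across the whole message rather than per array. The sketch below is hypothetical — the type and limit names are ours, and the actual fix shipped in steemd 0.20.7 may differ:

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical mitigation sketch: one budget object per message,
// passed down through every recursive container unpack.
struct UnpackLimits {
    static constexpr uint32_t MAX_DEPTH = 64;                   // assumed cap
    static constexpr uint64_t MAX_TOTAL = 10ull * 1024 * 1024;  // shared, not per-array

    uint32_t depth = 0;      // current nesting depth
    uint64_t reserved = 0;   // bytes reserved so far, across all arrays

    // Called before resizing a container during unpack.
    void enter_array(uint64_t claimed_bytes) {
        if (++depth > MAX_DEPTH)
            throw std::runtime_error("nesting too deep");
        reserved += claimed_bytes;
        if (reserved > MAX_TOTAL)
            throw std::runtime_error("cumulative allocation too large");
    }

    // Called after the container's contents have been read.
    void leave_array() { --depth; }
};
```

Because the 10MB limit is now a shared budget rather than a per-level one, the recursive-nesting trick can no longer multiply it by the nesting depth, and the depth cap bounds the stack as well.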

It appears this vulnerability exists in the original Graphene implementation from which Steem is derived, so other blockchain implementations such as Bitshares are also vulnerable. The same hello_message appears in https://github.com/cryptonomex/graphene/blob/master/libraries/net/include/graphene/net/core_messages.hpp

The Steem development team was notified of this bug on December 3rd, and fixed the problem in releases 0.20.7 (December 10th) and 0.20.8 (December 14th).

The author attempted to submit a notice through Bitshares’ bounty website at hackthedex.io, but the form did not permit us to submit a bug without a Bitshares account, even while declining the bounty. An email to [email protected] was not answered.

We were not able to locate contact addresses for other Graphene-based blockchains, such as Golos or Smoke.io.

Fuzz.ai

Fuzz.ai is an early-stage startup dedicated to making software correctness tools easier to use. Fuzzers, model checkers, and property-based testing can make software more robust, expose security vulnerabilities, and speed development.


This story was recommended by Steeve to its users and upvoted by one or more of them.

Check @steeveapp to learn more about Steeve, an AI-powered Steem interface.


I haven't identified other good entry points for fuzzing yet; one of the things I'm building is tooling that will make it easier to do so and construct the harness automatically. There are also fuzzing tools specifically designed for testing network services which could be used, but whitebox testing is usually more efficient.

I hope to demonstrate other sorts of tools as well, so I might do a TLA+ model on part of the Steem design.

I've been looking at other software, including another blockchain in which I found bugs, but I haven't heard back from their bug bounty program yet so I'm giving them more time before publishing.

Thanks for sharing this well-documented issue. And well done Steemit for the fast fix; I haven't had much reason to say positive things lately.


This might explain why @ned and the @steemit team has been silent @exyle. They've been busy!
