Where is the salt on the OpenSSL AES encryption?

  • I'm interested in knowing how and where OpenSSL inserts the generated salt into AES-encrypted data. Why? I'm encrypting data in Java classes and need to guarantee that I can use OpenSSL to decrypt it.

    For instance, let's say I have this encrypted base64 string, generated with the passphrase "abc":

    # generated with "openssl enc -aes-256-cbc -a"
    U2FsdGVkX1+tfvgUkjErP6j2kUAVwWZzNlaAmTqhzTk=

    To decrypt it we can use:

    echo U2FsdGVkX1+tfvgUkjErP6j2kUAVwWZzNlaAmTqhzTk= | openssl enc -d -a -aes-256-cbc -p
    # enc -d
    #     decryption
    # -a 
    #     input is base64
    # -aes-256-cbc 
    #     the aes algorithm used in encryption
    # -p 
    #     print salt, key and iv params

    Running this using the "abc" passphrase will result in:

    salt=AD7EF81492312B3F
    iv =95A770DE9E0130E77C8E5D796D1B4EF5

    Now, we know that for AES to decrypt the data it needs the key and the Initialization Vector.

    In the case of OpenSSL, the manual says the key is derived from the passphrase and a salt, and the Initialization Vector is derived from the same passphrase and salt (if not specified manually). That means the generated data doesn't need to carry the IV, but it does need to carry the salt, or else the key for decryption can never be regenerated correctly.
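For context, OpenSSL's legacy key/IV derivation (EVP_BytesToKey) can be sketched in Python. This is a sketch under two assumptions: the historical default digest MD5 (newer OpenSSL versions default to SHA-256) and an iteration count of 1; the function name is my own.

```python
import hashlib

def evp_bytes_to_key(password: bytes, salt: bytes,
                     key_len: int = 32, iv_len: int = 16):
    """Sketch of OpenSSL's legacy EVP_BytesToKey (MD5, count=1).

    Repeatedly hashes (previous digest + password + salt) until there
    is enough material for both the key and the IV, then splits it.
    """
    material = b""
    prev = b""
    while len(material) < key_len + iv_len:
        prev = hashlib.md5(prev + password + salt).digest()
        material += prev
    return material[:key_len], material[key_len:key_len + iv_len]

# The passphrase and salt from the question.
key, iv = evp_bytes_to_key(b"abc", bytes.fromhex("AD7EF81492312B3F"))
```

Note how the IV is just the continuation of the same digest chain that produced the key, which is why only the salt has to travel with the ciphertext.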

    So, the point is, where's the salt and how's it inserted in the resulting data? Doing some basic analysis on the generated data (decoding from base64 and outputting the hex values) we can see that the salt is not prepended or appended to the resulting data, but somehow it is there:

    # salt: AD7EF81492312B3F
    echo U2FsdGVkX1+tfvgUkjErP6j2kUAVwWZzNlaAmTqhzTk= | openssl enc -d -base64 | od -x                                                                                                                                                                                         
    0000000 6153 746c 6465 5f5f 7ead 14f8 3192 3f2b
    0000020 f6a8 4091 c115 7366 5636 9980 a13a 39cd

    You can see that the salt "AD7E..." does not appear verbatim in the dump. It looks like some transformation occurred.

    It looks like the salt is switched pair by pair and inserted in the data, starting on byte #9. Is this a common practice or something that only OpenSSL implements?

    # salt:                     AD7E F814 9231 2B3F
    # switch pair by pair:      7EAD 14F8 3192 3F2B
    # data: 6153 746c 6465 5f5f 7ead 14f8 3192 3f2b f6a8 4091 c115 7366 5636 9980 a13a 39cd


    As Thomas Pornin stated, the problem here is that od -x decodes the data as 16-bit words in the machine's native byte order. As my computer is x86_64 (little-endian), the salt merely looks "swapped". I had forgotten how tricky endianness can be. Now I will always remember to use od -t x1

    Anyway, I'm still interested in knowing whether inserting the salt at the 9th byte is a common practice or an OpenSSL-specific implementation. I also noticed that the first eight bytes are the ASCII characters Salted__
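The layout described above (8-byte "Salted__" magic, 8-byte salt, then ciphertext) can be split apart with a short sketch; the function name is my own.

```python
import base64

def split_openssl_salted(blob: bytes):
    """Split an OpenSSL 'Salted__' blob into (salt, ciphertext)."""
    if blob[:8] != b"Salted__":
        raise ValueError("not an OpenSSL salted payload")
    return blob[8:16], blob[16:]

# The base64 string from the question.
data = base64.b64decode("U2FsdGVkX1+tfvgUkjErP6j2kUAVwWZzNlaAmTqhzTk=")
salt, ciphertext = split_openssl_salted(data)
print(salt.hex().upper())  # AD7EF81492312B3F
```

Running this on the question's data recovers exactly the salt that openssl -p printed, confirming bytes 8-15 carry it.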

    Just a clarification, as I don't know the answer: in which scenario would you be using AES to encrypt data using a salt? There was a discussion here: http://security.stackexchange.com/questions/10476/is-aes-encrypting-a-password-with-itself-more-secure-than-sha1 Are you trying to do something similar?

    @Polaco, thank you for this information. I was looking for the same, so I can use the openssl command-line tool to decrypt text encrypted by other means. Were you able to successfully decrypt the Java-encrypted data with openssl by prefixing 'Salted__' + the salt used to it?
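Going the other way should, in principle, just mean framing the externally produced ciphertext in the same layout before handing it to openssl. A minimal sketch, assuming the same format described in the question; the salt and ciphertext values below are placeholders, not real output:

```python
import base64

def openssl_salted_blob(salt: bytes, ciphertext: bytes) -> bytes:
    """Frame a ciphertext in the layout OpenSSL's enc command expects."""
    if len(salt) != 8:
        raise ValueError("OpenSSL uses an 8-byte salt")
    return b"Salted__" + salt + ciphertext

# Placeholder values for illustration only.
blob = openssl_salted_blob(bytes(8), b"\x00" * 16)
print(base64.b64encode(blob).decode())
```

The key and IV would still have to be derived from the passphrase and that same salt on the Java side, or the two tools will disagree.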

  • Yes, a transformation occurred: endianness...

    Look at the bytes 8 to 15: 7ead 14f8 3192 3f2b. That's your salt. It is a known quirk of od: it decodes data in 16-bit units using the host's byte order (little-endian on x86), then shows them "numerically", so this incurs an apparent byte swap.

    Use od -t x1 to get a nicer output.
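The apparent swap can be reproduced directly: reading the raw salt bytes as little-endian 16-bit words (what od -x effectively does on an x86 machine) yields exactly the "swapped" view from the dump.

```python
import struct

salt = bytes.fromhex("AD7EF81492312B3F")

# Interpret the 8 bytes as four little-endian 16-bit words,
# mimicking what `od -x` does on a little-endian machine.
words = struct.unpack("<4H", salt)
print(" ".join(f"{w:04x}" for w in words))  # 7ead 14f8 3192 3f2b
```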

    Edit: to answer your other question, what OpenSSL does is neither standard nor common practice; it is just "what OpenSSL has always done". It is not well documented.

    Thanks! After all the work to analyse and write this down, it turns out od's -x default follows the machine's little-endian byte order... why not big-endian? Legacy defaults?

    @Polaco Because your computer is little-endian. And od probably just loads the bytes directly into memory, so it isn't even aware of the endianness.

License under CC-BY-SA with attribution

Content dated before 7/24/2021 11:53 AM