mass update

builder:
  kotlin 1.3.41
bootloader_message:
  reboot rescue
  reboot fastboot
mkbootimg:
  update mkbootimg from AOSP master
  modify our header packer accordingly
avbtool:
  update from commit 9d3646515bf0b5f09d8bdbe0b844c7eefa0c0802
  Tue May 14 15:30:37 2019 -0400
remove java (Struct.java deleted, java Gradle plugin dropped)
pull/31/head
parent c3bb4fb356
commit 79b84baf68

@@ -1,13 +0,0 @@
t:
external/extract_kernel.py \
--input build/unzip_boot/kernel \
--output-configs kernel_configs.txt \
--output-version kernel_version.txt
t2:
rm -fr dtbo
mkdir dtbo
external/mkdtboimg.py \
dump dtbo.img \
--dtb dtbo/dtb.dump \
--output dtbo/header.dump
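(The two scratch targets above only shelled out to the vendored AOSP scripts. For reference, a minimal Python equivalent of target `t`, with the paths exactly as in the deleted target:)

# Sketch of the removed 't' target: dump kernel configs/version via the
# vendored AOSP script. Paths are the ones from the deleted makefile.
import subprocess

subprocess.check_call([
    'external/extract_kernel.py',
    '--input', 'build/unzip_boot/kernel',
    '--output-configs', 'kernel_configs.txt',
    '--output-version', 'kernel_version.txt',
])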

@@ -2,7 +2,7 @@
 [![Build Status](https://travis-ci.org/cfig/Android_boot_image_editor.svg?branch=master)](https://travis-ci.org/cfig/Android_boot_image_editor)
 [![License](http://img.shields.io/:license-apache-blue.svg?style=flat-square)](http://www.apache.org/licenses/LICENSE-2.0.html)

-This tool focuses on editing Android boot.img(also recovery.img, recovery-two-step.img and vbmeta.img).
+This tool focuses on editing Android boot.img(also recovery.img, and vbmeta.img).

 ## 1. Prerequisite
 #### 1.1 Host OS requirement:
@@ -12,13 +12,13 @@ Also need python 2.x and jdk 8.
 #### 1.2 Target Android requirement:

-(1) Target boot.img MUST follows AOSP verified boot flow, either [Boot image signature](https://source.android.com/security/verifiedboot/verified-boot#signature_format) in VBoot 1.0 or [AVB HASH footer](https://android.googlesource.com/platform/external/avb/+/master/README.md#The-VBMeta-struct) in VBoot 2.0.
+(1) Target boot.img MUST follows AOSP verified boot flow, either [Boot image signature](https://source.android.com/security/verifiedboot/verified-boot#signature_format) in VBoot 1.0 or [AVB HASH footer](https://android.googlesource.com/platform/external/avb/+/master/README.md#The-VBMeta-struct) (a.k.a. AVB) in VBoot 2.0.

 Supported images:
 - boot.img
-- recovery.img
-- recovery-two-step.img
-- vbmeta.img
+- recovery.img (also recovery-two-step.img)
+- vbmeta.img (also vbmeta\_system.img, vbmeta\_vendor.img etc.)
+- dtbo.img (only 'unpack' is supported)

 (2) These utilities are known to work for Nexus/Pixel boot.img for the following Android releases:

avb/avbtool (vendored)

@@ -1121,7 +1121,7 @@ class AvbDescriptor(object):
     return bytearray(ret)

   def verify(self, image_dir, image_ext, expected_chain_partitions_map,
-             image_containing_descriptor):
+             image_containing_descriptor, accept_zeroed_hashtree):
     """Verifies contents of the descriptor - used in verify_image sub-command.

     Arguments:
@@ -1130,6 +1130,7 @@ class AvbDescriptor(object):
       expected_chain_partitions_map: A map from partition name to the
           tuple (rollback_index_location, key_blob).
       image_containing_descriptor: The image the descriptor is in.
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.

     Returns:
       True if the descriptor verifies, False otherwise.
@@ -1207,7 +1208,7 @@ class AvbPropertyDescriptor(AvbDescriptor):
     return bytearray(ret)

   def verify(self, image_dir, image_ext, expected_chain_partitions_map,
-             image_containing_descriptor):
+             image_containing_descriptor, accept_zeroed_hashtree):
     """Verifies contents of the descriptor - used in verify_image sub-command.

     Arguments:
@@ -1216,6 +1217,7 @@ class AvbPropertyDescriptor(AvbDescriptor):
       expected_chain_partitions_map: A map from partition name to the
           tuple (rollback_index_location, key_blob).
       image_containing_descriptor: The image the descriptor is in.
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.

     Returns:
       True if the descriptor verifies, False otherwise.
@@ -1369,7 +1371,7 @@ class AvbHashtreeDescriptor(AvbDescriptor):
     return bytearray(ret)

   def verify(self, image_dir, image_ext, expected_chain_partitions_map,
-             image_containing_descriptor):
+             image_containing_descriptor, accept_zeroed_hashtree):
     """Verifies contents of the descriptor - used in verify_image sub-command.

     Arguments:
@@ -1378,6 +1380,7 @@ class AvbHashtreeDescriptor(AvbDescriptor):
       expected_chain_partitions_map: A map from partition name to the
           tuple (rollback_index_location, key_blob).
       image_containing_descriptor: The image the descriptor is in.
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.

     Returns:
       True if the descriptor verifies, False otherwise.
@@ -1406,17 +1409,22 @@ class AvbHashtreeDescriptor(AvbDescriptor):
     # ... also check that the on-disk hashtree matches
     image.seek(self.tree_offset)
     hash_tree_ondisk = image.read(self.tree_size)
-    if hash_tree != hash_tree_ondisk:
-      sys.stderr.write('hashtree of {} contains invalid data\n'.
-                       format(image_filename))
-      return False
+    is_zeroed = (hash_tree_ondisk[0:8] == 'ZeRoHaSH')
+    if is_zeroed and accept_zeroed_hashtree:
+      print ('{}: skipping verification since hashtree is zeroed and --accept_zeroed_hashtree was given'
+             .format(self.partition_name))
+    else:
+      if hash_tree != hash_tree_ondisk:
+        sys.stderr.write('hashtree of {} contains invalid data\n'.
+                         format(image_filename))
+        return False
+      print ('{}: Successfully verified {} hashtree of {} for image of {} bytes'
+             .format(self.partition_name, self.hash_algorithm, image.filename,
+                     self.image_size))

     # TODO: we could also verify that the FEC stored in the image is
     # correct but this a) currently requires the 'fec' binary; and b)
     # takes a long time; and c) is not strictly needed for
     # verification purposes as we've already verified the root hash.
-    print ('{}: Successfully verified {} hashtree of {} for image of {} bytes'
-           .format(self.partition_name, self.hash_algorithm, image.filename,
-                   self.image_size))

     return True
@@ -1526,7 +1534,7 @@ class AvbHashDescriptor(AvbDescriptor):
     return bytearray(ret)

   def verify(self, image_dir, image_ext, expected_chain_partitions_map,
-             image_containing_descriptor):
+             image_containing_descriptor, accept_zeroed_hashtree):
     """Verifies contents of the descriptor - used in verify_image sub-command.

     Arguments:
@@ -1535,6 +1543,7 @@ class AvbHashDescriptor(AvbDescriptor):
       expected_chain_partitions_map: A map from partition name to the
           tuple (rollback_index_location, key_blob).
       image_containing_descriptor: The image the descriptor is in.
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.

     Returns:
       True if the descriptor verifies, False otherwise.
@@ -1636,7 +1645,7 @@ class AvbKernelCmdlineDescriptor(AvbDescriptor):
     return bytearray(ret)

   def verify(self, image_dir, image_ext, expected_chain_partitions_map,
-             image_containing_descriptor):
+             image_containing_descriptor, accept_zeroed_hashtree):
     """Verifies contents of the descriptor - used in verify_image sub-command.

     Arguments:
@@ -1645,6 +1654,7 @@ class AvbKernelCmdlineDescriptor(AvbDescriptor):
       expected_chain_partitions_map: A map from partition name to the
           tuple (rollback_index_location, key_blob).
       image_containing_descriptor: The image the descriptor is in.
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.

     Returns:
       True if the descriptor verifies, False otherwise.
@@ -1739,7 +1749,7 @@ class AvbChainPartitionDescriptor(AvbDescriptor):
     return bytearray(ret)

   def verify(self, image_dir, image_ext, expected_chain_partitions_map,
-             image_containing_descriptor):
+             image_containing_descriptor, accept_zeroed_hashtree):
     """Verifies contents of the descriptor - used in verify_image sub-command.

     Arguments:
@@ -1748,6 +1758,7 @@ class AvbChainPartitionDescriptor(AvbDescriptor):
       expected_chain_partitions_map: A map from partition name to the
           tuple (rollback_index_location, key_blob).
       image_containing_descriptor: The image the descriptor is in.
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.

     Returns:
       True if the descriptor verifies, False otherwise.
@@ -2086,6 +2097,63 @@ class Avb(object):
     # And cut...
     image.truncate(new_image_size)

+  def zero_hashtree(self, image_filename):
+    """Implements the 'zero_hashtree' command.
+
+    Arguments:
+      image_filename: File to zero hashtree and FEC data from.
+
+    Raises:
+      AvbError: If there's no footer in the image.
+    """
+    image = ImageHandler(image_filename)
+    (footer, _, descriptors, _) = self._parse_image(image)
+
+    if not footer:
+      raise AvbError('Given image does not have a footer.')
+
+    # Search for a hashtree descriptor to figure out the location and
+    # size of the hashtree and FEC.
+    ht_desc = None
+    for desc in descriptors:
+      if isinstance(desc, AvbHashtreeDescriptor):
+        ht_desc = desc
+        break
+
+    if not ht_desc:
+      raise AvbError('No hashtree descriptor was found.')
+
+    zero_ht_start_offset = ht_desc.tree_offset
+    zero_ht_num_bytes = ht_desc.tree_size
+    zero_fec_start_offset = None
+    zero_fec_num_bytes = 0
+    if ht_desc.fec_offset > 0:
+      if ht_desc.fec_offset != ht_desc.tree_offset + ht_desc.tree_size:
+        raise AvbError('Hash-tree and FEC data must be adjacent.')
+      zero_fec_start_offset = ht_desc.fec_offset
+      zero_fec_num_bytes = ht_desc.fec_size
+    zero_end_offset = zero_ht_start_offset + zero_ht_num_bytes + zero_fec_num_bytes
+    image.seek(zero_end_offset)
+    data = image.read(image.image_size - zero_end_offset)
+
+    # Write zeroes all over hashtree and FEC, except for the first eight bytes
+    # where a magic marker - ZeroHaSH - is placed. Place these markers in the
+    # beginning of both hashtree and FEC. (That way, in the future we can add
+    # options to 'avbtool zero_hashtree' so as to zero out only either/or.)
+    #
+    # Applications can use these markers to detect that the hashtree and/or
+    # FEC needs to be recomputed.
+    image.truncate(zero_ht_start_offset)
+    data_zeroed_firstblock = 'ZeRoHaSH' + '\0'*(image.block_size - 8)
+    image.append_raw(data_zeroed_firstblock)
+    image.append_fill('\0\0\0\0', zero_ht_num_bytes - image.block_size)
+    if zero_fec_start_offset:
+      image.append_raw(data_zeroed_firstblock)
+      image.append_fill('\0\0\0\0', zero_fec_num_bytes - image.block_size)
+    image.append_raw(data)
+
   def resize_image(self, image_filename, partition_size):
     """Implements the 'resize_image' command.
@@ -2220,7 +2288,8 @@ class Avb(object):
     if num_printed == 0:
       o.write('    (none)\n')

-  def verify_image(self, image_filename, key_path, expected_chain_partitions, follow_chain_partitions):
+  def verify_image(self, image_filename, key_path, expected_chain_partitions, follow_chain_partitions,
+                   accept_zeroed_hashtree):
     """Implements the 'verify_image' command.

     Arguments:
@@ -2229,6 +2298,7 @@ class Avb(object):
       expected_chain_partitions: List of chain partitions to check or None.
       follow_chain_partitions: If True, will follows chain partitions even when not
           specified with the --expected_chain_partition option
+      accept_zeroed_hashtree: If True, don't fail if hashtree or FEC data is zeroed out.
     """
     expected_chain_partitions_map = {}
     if expected_chain_partitions:
@@ -2244,8 +2314,7 @@ class Avb(object):
         expected_chain_partitions_map[partition_name] = (rollback_index_location, pk_blob)

     image_dir = os.path.dirname(image_filename)
-    #image_ext = os.path.splitext(image_filename)[1]
-    image_ext = image_filename[image_filename.index('.'):]
+    image_ext = os.path.splitext(image_filename)[1]

     key_blob = None
     if key_path:
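(The image_ext fix above matters for chained-partition lookups: slicing from the first '.' breaks on names with more than one dot, while os.path.splitext takes the last one. A worked example; the file name is hypothetical:)

import os.path

name = 'vbmeta_system.img.signed'   # hypothetical multi-dot name
os.path.splitext(name)[1]           # '.signed'      (last dot)
name[name.index('.'):]              # '.img.signed'  (old code: first dot)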
@@ -2295,13 +2364,14 @@ class Avb(object):
                 .format(desc.partition_name, desc.rollback_index_location,
                         hashlib.sha1(desc.public_key).hexdigest()))
       else:
-        if not desc.verify(image_dir, image_ext, expected_chain_partitions_map, image):
+        if not desc.verify(image_dir, image_ext, expected_chain_partitions_map, image,
+                           accept_zeroed_hashtree):
           raise AvbError('Error verifying descriptor.')
         # Honor --follow_chain_partitions - add '--' to make the output more readable.
         if isinstance(desc, AvbChainPartitionDescriptor) and follow_chain_partitions:
           print '--'
           chained_image_filename = os.path.join(image_dir, desc.partition_name + image_ext)
-          self.verify_image(chained_image_filename, key_path, None, False)
+          self.verify_image(chained_image_filename, key_path, None, False, accept_zeroed_hashtree)

   def calculate_vbmeta_digest(self, image_filename, hash_algorithm, output):
@@ -4027,6 +4097,14 @@ class AvbTool(object):
                             action='store_true')
     sub_parser.set_defaults(func=self.erase_footer)

+    sub_parser = subparsers.add_parser('zero_hashtree',
+                                       help='Zero out hashtree and FEC data.')
+    sub_parser.add_argument('--image',
+                            help='Image with a footer',
+                            type=argparse.FileType('rwb+'),
+                            required=True)
+    sub_parser.set_defaults(func=self.zero_hashtree)
+
     sub_parser = subparsers.add_parser('extract_vbmeta_image',
                                        help='Extracts vbmeta from an image with a footer.')
     sub_parser.add_argument('--image',
@@ -4087,6 +4165,9 @@ class AvbTool(object):
                             help=('Follows chain partitions even when not '
                                   'specified with the --expected_chain_partition option'),
                             action='store_true')
+    sub_parser.add_argument('--accept_zeroed_hashtree',
+                            help=('Accept images where the hashtree or FEC data is zeroed out'),
+                            action='store_true')
     sub_parser.set_defaults(func=self.verify_image)

     sub_parser = subparsers.add_parser(
@@ -4348,6 +4429,10 @@ class AvbTool(object):
     """Implements the 'erase_footer' sub-command."""
     self.avb.erase_footer(args.image.name, args.keep_hashtree)

+  def zero_hashtree(self, args):
+    """Implements the 'zero_hashtree' sub-command."""
+    self.avb.zero_hashtree(args.image.name)
+
   def extract_vbmeta_image(self, args):
     """Implements the 'extract_vbmeta_image' sub-command."""
     self.avb.extract_vbmeta_image(args.output, args.image.name,
@@ -4369,7 +4454,8 @@ class AvbTool(object):
     """Implements the 'verify_image' sub-command."""
     self.avb.verify_image(args.image.name, args.key,
                           args.expected_chain_partition,
-                          args.follow_chain_partitions)
+                          args.follow_chain_partitions,
+                          args.accept_zeroed_hashtree)

   def calculate_vbmeta_digest(self, args):
     """Implements the 'calculate_vbmeta_digest' sub-command."""

@@ -1,6 +1,6 @@
 buildscript {
     ext {
-        kotlinVersion = "1.3.30"
+        kotlinVersion = "1.3.41"
     }
     repositories {
         mavenCentral()
@@ -11,7 +11,6 @@ buildscript {
     }
 }
-apply plugin: "java"
 apply plugin: "kotlin"
 apply plugin: "application"

@@ -1,367 +0,0 @@
package cfig.io;
import cfig.Helper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import static org.junit.Assert.assertEquals;
public class Struct {
private static Logger log = LoggerFactory.getLogger(Struct.class);
private ByteOrder byteOrder = ByteOrder.LITTLE_ENDIAN;
private List<Object[]> formats = new ArrayList<>();
public Struct(String formatString) {
Matcher m = Pattern.compile("(\\d*)([a-zA-Z])").matcher(formatString);
if (formatString.startsWith(">") || formatString.startsWith("!")) {
this.byteOrder = ByteOrder.BIG_ENDIAN;
log.debug("Parsing BIG_ENDIAN format: " + formatString);
} else if (formatString.startsWith("@") || formatString.startsWith("=")) {
this.byteOrder = ByteOrder.nativeOrder();
log.debug("Parsing native ENDIAN format: " + formatString);
} else {
log.debug("Parsing LITTLE_ENDIAN format: " + formatString);
}
while (m.find()) {
boolean bExpand = true;
int mul = 1;
if (!m.group(1).isEmpty()) {
mul = Integer.decode(m.group(1));
}
//item[0]: Type, item[1]: multiple
// if need to expand format items, explode it
// eg: "4L" will be exploded to "1L 1L 1L 1L"
// eg: "10x" won't be exploded, it's still "10x"
Object item[] = new Object[2];
switch (m.group(2)) {
case "x": {//byte 1
item[0] = PadByte.class;
bExpand = false;
break;
}
case "b": {//byte 1
item[0] = Byte.class;
bExpand = false;
break;
}
case "s": {//python: char 1
item[0] = Character.class;
bExpand = false;
break;
}
case "h": {//2
item[0] = Short.class;
break;
}
case "H": {//2
item[0] = UnsignedShort.class;
break;
}
case "i":
case "l": {//4
item[0] = Integer.class;
break;
}
case "I":
case "L": {//4
item[0] = UnsignedInt.class;
break;
}
case "q": {//8
item[0] = Long.class;
break;
}
case "Q": {//8
item[0] = UnsignedLong.class;
break;
}
default: {
throw new IllegalArgumentException("type [" + m.group(2) + "] not supported");
}
}
if (bExpand) {
item[1] = 1;
for (int i = 0; i < mul; i++) {
formats.add(item);
}
} else {
item[1] = mul;
formats.add(item);
}
}
}
public Integer calcSize() {
int ret = 0;
for (Object[] format : formats) {
if (format[0] == Byte.class || format[0] == Character.class || format[0] == PadByte.class) {
ret += (int) format[1];
continue;
}
if (format[0] == Short.class) {
ret += 2 * (int) format[1];
continue;
}
if (format[0] == UnsignedShort.class) {
ret += 2 * (int) format[1];
continue;
}
if (format[0] == Integer.class) {
ret += 4 * (int) format[1];
continue;
}
if (format[0] == UnsignedInt.class) {
ret += 4 * (int) format[1];
continue;
}
if (format[0] == Long.class || format[0] == UnsignedLong.class) {
ret += 8 * (int) format[1];
continue;
}
throw new IllegalArgumentException("Class [" + format[0] + "] not supported");
}
return ret;
}
public void dump() {
log.info("--- Format ---");
log.info("Endian: " + this.byteOrder);
for (Object[] formatItem : formats) {
log.info(formatItem[0] + ":" + formatItem[1]);
}
log.info("--- Format ---");
}
public List unpack(InputStream iS) throws IOException {
List<Object> ret = new ArrayList<>();
ByteBuffer bf = ByteBuffer.allocate(32);
bf.order(this.byteOrder);
for (Object[] format : this.formats) {
//return 'null' for padding bytes
if (format[0] == PadByte.class) {
long skipped = iS.skip((Integer) format[1]);
assertEquals((long) (Integer) format[1], skipped);
ret.add(null);
continue;
}
if (format[0] == Byte.class || format[0] == Character.class) {
byte[] data = new byte[(Integer) format[1]];
assertEquals((int) format[1], iS.read(data));
ret.add(data);
continue;
}
if (format[0] == Short.class) {
byte[] data = new byte[2];
assertEquals(2, iS.read(data));
bf.clear();
bf.put(data);
bf.flip();
ret.add(bf.getShort());
continue;
}
if (format[0] == UnsignedShort.class) {
byte[] data = new byte[2];
assertEquals(2, iS.read(data));
log.debug("UnsignedShort: " + Helper.Companion.toHexString(data));
bf.clear();
if (this.byteOrder == ByteOrder.LITTLE_ENDIAN) {
bf.put(data);
bf.put(new byte[2]); //complete high bits with 0
} else {
bf.put(new byte[2]); //complete high bits with 0
bf.put(data);
}
bf.flip();
ret.add(bf.getInt());
continue;
}
if (format[0] == Integer.class) {
byte[] data = new byte[4];
assertEquals(4, iS.read(data));
log.debug("Integer: " + Helper.Companion.toHexString(data));
bf.clear();
bf.put(data);
bf.flip();
ret.add(bf.getInt());
continue;
}
if (format[0] == UnsignedInt.class) {
byte[] data = new byte[4];
assertEquals(4, iS.read(data));
bf.clear();
log.debug("UnsignedInt: " + Helper.Companion.toHexString(data));
if (this.byteOrder == ByteOrder.LITTLE_ENDIAN) {
bf.put(data);
bf.put(new byte[4]); //complete high bits with 0
} else {
bf.put(new byte[4]); //complete high bits with 0
bf.put(data);
}
bf.flip();
ret.add(bf.getLong());
continue;
}
//TODO: maybe exceeds limits of Long.class ?
if (format[0] == Long.class || format[0] == UnsignedLong.class) {
byte[] data = new byte[8];
assertEquals(8, iS.read(data));
bf.clear();
bf.put(data);
bf.flip();
ret.add(bf.getLong());
continue;
}
throw new IllegalArgumentException("Class [" + format[0] + "] not supported");
}
return ret;
}
public byte[] pack(Object... args) {
if (args.length != this.formats.size()) {
throw new IllegalArgumentException("argument size " + args.length +
" doesn't match format size " + this.formats.size());
}
ByteBuffer bf = ByteBuffer.allocate(this.calcSize());
bf.order(this.byteOrder);
for (int i = 0; i < args.length; i++) {
Object arg = args[i];
Class<?> format = (Class<?>) formats.get(i)[0];
int size = (int) formats.get(i)[1];
log.debug("Index[" + i + "], fmt = " + format + ", arg = " + arg + ", multi = " + size);
//padding
if (format == PadByte.class) {
byte b[] = new byte[size];
if (arg == null) {
Arrays.fill(b, (byte) 0);
} else if (arg instanceof Byte) {
Arrays.fill(b, (byte) arg);
} else if (arg instanceof Integer) {
Arrays.fill(b, ((Integer) arg).byteValue());
} else {
throw new IllegalArgumentException("Index[" + i + "] Unsupported arg [" + arg + "] with type [" + format + "]");
}
bf.put(b);
continue;
}
//signed byte
if (arg instanceof byte[]) {
bf.put((byte[]) arg);
int paddingSize = size - ((byte[]) arg).length;
if (0 < paddingSize) {
byte padBytes[] = new byte[size - ((byte[]) arg).length];
Arrays.fill(padBytes, (byte) 0);
bf.put(padBytes);
} else if (0 > paddingSize) {
log.error("container size " + size + ", value size " + ((byte[]) arg).length);
throw new IllegalArgumentException("Index[" + i + "] arg [" + arg + "] with type [" + format + "] size overflow");
} else {
log.debug("perfect match, paddingSize is zero");
}
continue;
}
//unsigned byte
if (arg instanceof int[] && format == Byte.class) {
for (int v : (int[]) arg) {
if (v > 255 || v < 0) {
throw new IllegalArgumentException("Index[" + i + "] Unsupported [int array] arg [" + arg + "] with type [" + format + "]");
}
bf.put((byte) v);
}
continue;
}
if (arg instanceof Short) {
bf.putShort((short) arg);
continue;
}
if (arg instanceof Integer) {
if (format == Integer.class) {
bf.putInt((int) arg);
} else if (format == UnsignedShort.class) {
ByteBuffer bf2 = ByteBuffer.allocate(4);
bf2.order(this.byteOrder);
bf2.putInt((int) arg);
bf2.flip();
if (this.byteOrder == ByteOrder.LITTLE_ENDIAN) {//LE
bf.putShort(bf2.getShort());
bf2.getShort();//discard
} else {//BE
bf2.getShort();//discard
bf.putShort(bf2.getShort());
}
} else if (format == UnsignedInt.class) {
if ((Integer) arg < 0) {
throw new IllegalArgumentException("Index[" + i + "] Unsupported [Integer] arg [" + arg + "] with type [" + format + "]");
}
bf.putInt((int) arg);
} else {
throw new IllegalArgumentException("Index[" + i + "] Unsupported [Integer] arg [" + arg + "] with type [" + format + "]");
}
continue;
}
if (arg instanceof Long) {
//XXX: maybe run into issue if we meet REAL Unsigned Long
if (format == Long.class || format == UnsignedLong.class) {
bf.putLong((long) arg);
} else if (format == UnsignedInt.class) {
if ((Long) arg < 0L || (Long) arg > (Integer.MAX_VALUE * 2L + 1)) {
throw new IllegalArgumentException("Index[" + i + "] Unsupported [Long] arg [" + arg + "] with type [" + format + "]");
}
ByteBuffer bf2 = ByteBuffer.allocate(8);
bf2.order(this.byteOrder);
bf2.putLong((long) arg);
bf2.flip();
if (this.byteOrder == ByteOrder.LITTLE_ENDIAN) {//LE
bf.putInt(bf2.getInt());
bf2.getInt();//discard
} else {//BE
bf2.getInt();//discard
bf.putInt(bf2.getInt());
}
} else {
throw new IllegalArgumentException("Index[" + i + "] Unsupported arg [" + arg + "] with type [" + format + "]");
}
}
}
log.debug("Pack Result:" + Helper.Companion.toHexString(bf.array()));
return bf.array();
}
private static class UnsignedInt {
}
private static class UnsignedLong {
}
private static class UnsignedShort {
}
private static class PadByte {
}
}
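(The deleted Struct class above deliberately mirrored the format language of Python's struct module, and the Kotlin Struct3 used elsewhere in this change keeps the same convention. For reference, the semantics it copied:)

import struct

struct.calcsize('>2H4b')   # big-endian: 2 unsigned shorts + 4 bytes = 8
struct.calcsize('10x')     # 10 padding bytes; kept as one '10x' item
struct.pack('<2I', 1, 2)   # b'\x01\x00\x00\x00\x02\x00\x00\x00'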

@@ -35,6 +35,19 @@ class Helper {
             return baos.toByteArray()
         }

+        fun ByteArray.paddingWith(pageSize: UInt, paddingHead: Boolean = false): ByteArray {
+            val paddingNeeded = round_to_multiple(this.size.toUInt(), pageSize) - this.size.toUInt()
+            return if (paddingNeeded > 0u) {
+                if (paddingHead) {
+                    join(Struct3("${paddingNeeded}x").pack(null), this)
+                } else {
+                    join(this, Struct3("${paddingNeeded}x").pack(null))
+                }
+            } else {
+                this
+            }
+        }
+
         fun join(vararg source: ByteArray): ByteArray {
             val baos = ByteArrayOutputStream()
             for (src in source) {
@@ -275,7 +288,7 @@ class Helper {
                 "sha256" -> "sha-256"
                 "sha384" -> "sha-384"
                 "sha512" -> "sha-512"
-                else -> throw IllegalArgumentException("unknown algorithm: $alg")
+                else -> throw IllegalArgumentException("unknown algorithm: [$alg]")
             }
         }
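(paddingWith pads a byte array up to the next pageSize boundary, in front of the data when paddingHead is true - used below for the AVB footer, which must sit at the end of its block. The same logic in Python, for reference:)

def round_to_multiple(size, page):
    # same contract as Helper.round_to_multiple
    return (size + page - 1) // page * page

def padding_with(data, page_size, padding_head=False):
    pad = b'\x00' * (round_to_multiple(len(data), page_size) - len(data))
    return pad + data if padding_head else data + pad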

@@ -9,4 +9,4 @@ data class ParamConfig(
         var dtbo: String? = UnifiedConfig.workDir + "recoveryDtbo",
         var dtb: String? = UnifiedConfig.workDir + "dtb",
         var cfg: String = UnifiedConfig.workDir + "bootimg.json",
-        val mkbootimg: String = "./src/mkbootimg/mkbootimg")
+        val mkbootimg: String = "./tools/mkbootimg")

@@ -1,104 +0,0 @@
package cfig
import cfig.bootimg.BootImgInfo
import de.vandermeer.asciitable.AsciiTable
import org.slf4j.LoggerFactory
import java.io.File
import kotlin.system.exitProcess
@ExperimentalUnsignedTypes
fun main(args: Array<String>) {
val log = LoggerFactory.getLogger("Launcher")
if ((args.size == 6) && args[0] in setOf("pack", "unpack", "sign")) {
if (args[1] == "vbmeta.img") {
when (args[0]) {
"unpack" -> {
if (File(UnifiedConfig.workDir).exists()) File(UnifiedConfig.workDir).deleteRecursively()
File(UnifiedConfig.workDir).mkdirs()
Avb().parseVbMeta(args[1])
}
"pack" -> {
Avb().packVbMetaWithPadding(null)
}
"sign" -> {
log.info("vbmeta is already signed")
}
}
} else {
when (args[0]) {
"unpack" -> {
if (File(UnifiedConfig.workDir).exists()) File(UnifiedConfig.workDir).deleteRecursively()
File(UnifiedConfig.workDir).mkdirs()
val info = Parser().parseBootImgHeader(fileName = args[1], avbtool = args[3])
InfoTable.instance.addRule()
InfoTable.instance.addRow("image info", ParamConfig().cfg)
if (info.signatureType == BootImgInfo.VerifyType.AVB) {
log.info("continue to analyze vbmeta info in " + args[1])
Avb().parseVbMeta(args[1])
InfoTable.instance.addRule()
InfoTable.instance.addRow("AVB info", Avb.getJsonFileName(args[1]))
if (File("vbmeta.img").exists()) {
Avb().parseVbMeta("vbmeta.img")
}
}
Parser().extractBootImg(fileName = args[1], info2 = info)
InfoTable.instance.addRule()
val tableHeader = AsciiTable().apply {
addRule()
addRow("What", "Where")
addRule()
}
log.info("\n\t\t\tUnpack Summary of ${args[1]}\n{}\n{}", tableHeader.render(), InfoTable.instance.render())
log.info("Following components are not present: ${InfoTable.missingParts}")
}
"pack" -> {
Packer().pack(mkbootfsBin = args[5])
}
"sign" -> {
Signer.sign(avbtool = args[3], bootSigner = args[4])
val readBack2 = UnifiedConfig.readBack2()
if (readBack2.signatureType == BootImgInfo.VerifyType.AVB) {
if (File("vbmeta.img").exists()) {
// val sig = readBack[2] as ImgInfo.AvbSignature
// val newBootImgInfo = Avb().parseVbMeta(args[1] + ".signed")
// val hashDesc = newBootImgInfo.auxBlob!!.hashDescriptors[0]
// val origVbMeta = ObjectMapper().readValue(File(Avb.getJsonFileName("vbmeta.img")),
// AVBInfo::class.java)
// for (i in 0..(origVbMeta.auxBlob!!.hashDescriptors.size - 1)) {
// if (origVbMeta.auxBlob!!.hashDescriptors[i].partition_name == sig.partName) {
// val seq = origVbMeta.auxBlob!!.hashDescriptors[i].sequence
// origVbMeta.auxBlob!!.hashDescriptors[i] = hashDesc
// origVbMeta.auxBlob!!.hashDescriptors[i].sequence = seq
// }
// }
// ObjectMapper().writerWithDefaultPrettyPrinter().writeValue(File(Avb.getJsonFileName("vbmeta.img")), origVbMeta)
// log.info("vbmeta.img info updated")
// Avb().packVbMetaWithPadding()
} else {
log.info("no vbmeta.img need to update")
}
}//end-of-avb
}//end-of-sign
}
}
} else {
println("Usage: unpack <boot_image_path> <mkbootimg_bin_path> <avbtool_path> <boot_signer_path> <mkbootfs_bin_path>")
println("Usage: pack <boot_image_path> <mkbootimg_bin_path> <avbtool_path> <boot_signer_path> <mkbootfs_bin_path>")
println("Usage: sign <boot_image_path> <mkbootimg_bin_path> <avbtool_path> <boot_signer_path> <mkbootfs_bin_path>")
exitProcess(1)
}
}
/*
(a * x) mod m == 1
*/
// fun modInv(a: Int, m: Int): Int {
// for (x in 0 until m) {
// if (a * x % m == 1) {
// return x
// }
// }
// throw IllegalArgumentException("modular multiplicative inverse of [$a] under modulo [$m] doesn't exist")
// }
//

@@ -2,6 +2,7 @@ package cfig
 import avb.AVBInfo
 import avb.alg.Algorithms
+import cfig.Avb.Companion.getJsonFileName
 import cfig.bootimg.BootImgInfo
 import com.fasterxml.jackson.databind.ObjectMapper
 import org.apache.commons.exec.CommandLine
@@ -38,16 +39,10 @@ class Signer {
             //our signer
             File(cfg.info.output + ".clear").copyTo(File(cfg.info.output + ".signed"))
-            Avb().add_hash_footer(cfg.info.output + ".signed",
+            Avb().addHashFooter(cfg.info.output + ".signed",
                     info2.imageSize,
-                    use_persistent_digest = false,
-                    do_not_use_ab = false,
-                    salt = Helper.toHexString(bootDesc.salt),
-                    hash_algorithm = bootDesc.hash_algorithm_str,
                     partition_name = bootDesc.partition_name,
-                    rollback_index = ai.header!!.rollback_index.toLong(),
-                    common_algorithm = alg!!.name,
-                    inReleaseString = ai.header!!.release_string)
+                    newAvbInfo = ObjectMapper().readValue(File(getJsonFileName(cfg.info.output)), AVBInfo::class.java))

             //original signer
             File(cfg.info.output + ".clear").copyTo(File(cfg.info.output + ".signed2"))
             var cmdlineStr = "$avbtool add_hash_footer " +
@@ -55,8 +50,8 @@ class Signer {
                     "--partition_size ${info2.imageSize} " +
                     "--salt ${Helper.toHexString(bootDesc.salt)} " +
                     "--partition_name ${bootDesc.partition_name} " +
-                    "--hash_algorithm ${bootDesc.hash_algorithm_str} " +
-                    "--algorithm ${alg.name} "
+                    "--hash_algorithm ${bootDesc.hash_algorithm} " +
+                    "--algorithm ${alg!!.name} "
             if (alg.defaultKey.isNotBlank()) {
                 cmdlineStr += "--key ${alg.defaultKey}"
             }

@@ -1,5 +1,10 @@
 package avb
+import avb.blob.AuthBlob
+import avb.blob.AuxBlob
+import avb.blob.Footer
+import avb.blob.Header
 /*
     a wonderfaul base64 encoder/decoder: https://cryptii.com/base64-to-hex
  */

@@ -1,8 +0,0 @@
package avb
@ExperimentalUnsignedTypes
data class AuthBlob(
var offset: ULong = 0U,
var size: ULong = 0U,
var hash: String? = null,
var signature: String? = null)

@@ -1,9 +1,13 @@
 package cfig
-import avb.*
+import avb.AVBInfo
 import avb.alg.Algorithms
+import avb.blob.AuthBlob
+import avb.blob.AuxBlob
+import avb.blob.Footer
+import avb.blob.Header
 import avb.desc.*
-import avb.AuxBlob
+import cfig.Helper.Companion.paddingWith
 import cfig.io.Struct3
 import com.fasterxml.jackson.databind.ObjectMapper
 import org.apache.commons.codec.binary.Hex
@@ -14,7 +18,6 @@ import java.io.FileOutputStream
 import java.nio.file.Files
 import java.nio.file.Paths
 import java.nio.file.StandardOpenOption
-import java.security.MessageDigest

 @ExperimentalUnsignedTypes
 class Avb {
@@ -22,227 +25,119 @@ class Avb {
     private val MAX_FOOTER_SIZE = 4096
     private val BLOCK_SIZE = 4096

-    private var required_libavb_version_minor = 0
-
-    //migrated from: avbtool::Avb::add_hash_footer
-    fun add_hash_footer(image_file: String,
-                        partition_size: Long, //aligned by Avb::BLOCK_SIZE
-                        use_persistent_digest: Boolean,
-                        do_not_use_ab: Boolean,
-                        salt: String,
-                        hash_algorithm: String,
-                        partition_name: String,
-                        rollback_index: Long,
-                        common_algorithm: String,
-                        inReleaseString: String?) {
-        log.info("add_hash_footer($image_file) ...")
-        var original_image_size: ULong
-        //required libavb version
-        if (use_persistent_digest || do_not_use_ab) {
-            required_libavb_version_minor = 1
-        }
-        log.info("Required_libavb_version: 1.$required_libavb_version_minor")
-
-        // SIZE + metadata (footer + vbmeta struct)
-        val max_metadata_size = MAX_VBMETA_SIZE + MAX_FOOTER_SIZE
-        if (partition_size < max_metadata_size) {
-            throw IllegalArgumentException("Parition SIZE of $partition_size is too small. " +
-                    "Needs to be at least $max_metadata_size")
-        }
-        val max_image_size = partition_size - max_metadata_size
-        log.info("max_image_size: $max_image_size")
-
-        //TODO: typical block size = 4096L, from avbtool::Avb::ImageHandler::block_size
-        //since boot.img is not in sparse format, we are safe to hardcode it to 4096L for now
-        if (partition_size % BLOCK_SIZE != 0L) {
-            throw IllegalArgumentException("Partition SIZE of $partition_size is not " +
-                    "a multiple of the image block SIZE 4096")
-        }
-
-        //truncate AVB footer if there is. Then add_hash_footer() is idempotent
-        val fis = FileInputStream(image_file)
-        val originalFileSize = File(image_file).length()
-        if (originalFileSize > max_image_size) {
-            throw IllegalArgumentException("Image size of $originalFileSize exceeds maximum image size " +
-                    "of $max_image_size in order to fit in a partition size of $partition_size.")
-        }
-        fis.skip(originalFileSize - 64)
-        try {
-            val footer = Footer(fis)
-            original_image_size = footer.originalImageSize
-            FileOutputStream(File(image_file), true).channel.use {
-                log.info("original image $image_file has AVB footer, " +
-                        "truncate it to original SIZE: ${footer.originalImageSize}")
-                it.truncate(footer.originalImageSize.toLong())
-            }
-        } catch (e: IllegalArgumentException) {
-            log.info("original image $image_file doesn't have AVB footer")
-            original_image_size = originalFileSize.toULong()
-        }
-
-        //salt
-        var saltByteArray = Helper.fromHexString(salt)
-        if (salt.isBlank()) {
-            //If salt is not explicitly specified, choose a hash that's the same size as the hash size
-            val expectedDigestSize = MessageDigest.getInstance(Helper.pyAlg2java(hash_algorithm)).digest().size
-            FileInputStream(File("/dev/urandom")).use {
-                val randomSalt = ByteArray(expectedDigestSize)
-                it.read(randomSalt)
-                log.warn("salt is empty, using random salt[$expectedDigestSize]: " + Helper.toHexString(randomSalt))
-                saltByteArray = randomSalt
-            }
-        } else {
-            log.info("preset salt[${saltByteArray.size}] is valid: $salt")
-        }
-
-        //hash digest
-        val digest = MessageDigest.getInstance(Helper.pyAlg2java(hash_algorithm)).apply {
-            update(saltByteArray)
-            update(File(image_file).readBytes())
-        }.digest()
-        log.info("Digest(salt + file): " + Helper.toHexString(digest))
-
-        //HashDescriptor
-        val hd = HashDescriptor()
-        hd.image_size = File(image_file).length().toULong()
-        hd.hash_algorithm = hash_algorithm
-        hd.partition_name = partition_name
-        hd.salt = saltByteArray
-        hd.flags = 0U
-        if (do_not_use_ab) hd.flags = hd.flags or 1u
-        if (!use_persistent_digest) hd.digest = digest
-        log.info("encoded hash descriptor:" + Hex.encodeHexString(hd.encode()))
-
-        //VBmeta blob
-        val vbmeta_blob = generateVbMetaBlob(common_algorithm,
-                null,
-                arrayOf(hd as Descriptor),
-                null,
-                rollback_index,
-                0,
-                null,
-                null,
-                0U,
-                inReleaseString)
-        log.debug("vbmeta_blob: " + Helper.toHexString(vbmeta_blob))
-        Helper.dumpToFile("hashDescriptor.vbmeta.blob", vbmeta_blob)
-
-        log.info("Padding image ...")
+    //migrated from: avbtool::Avb::addHashFooter
+    fun addHashFooter(image_file: String,
+                      partition_size: Long, //aligned by Avb::BLOCK_SIZE
+                      partition_name: String,
+                      newAvbInfo: AVBInfo) {
+        log.info("addHashFooter($image_file) ...")
+
+        imageSizeCheck(partition_size, image_file)
+
+        //truncate AVB footer if there is. Then addHashFooter() is idempotent
+        trimFooter(image_file)
+        val newImageSize = File(image_file).length()
+
+        //VBmeta blob: update hash descriptor
+        newAvbInfo.apply {
+            val itr = this.auxBlob!!.hashDescriptors.iterator()
+            var hd = HashDescriptor()
+            while (itr.hasNext()) {//remove previous hd entry
+                val itrValue = itr.next()
+                if (itrValue.partition_name == partition_name) {
+                    itr.remove()
+                    hd = itrValue
+                }
+            }
+            //HashDescriptor
+            hd.update(image_file)
+            log.info("updated hash descriptor:" + Hex.encodeHexString(hd.encode()))
+            this.auxBlob!!.hashDescriptors.add(hd)
+        }
+        val vbmetaBlob = packVbMeta(newAvbInfo)
+        log.debug("vbmeta_blob: " + Helper.toHexString(vbmetaBlob))
+        Helper.dumpToFile("hashDescriptor.vbmeta.blob", vbmetaBlob)
+
         // image + padding
-        if (hd.image_size.toLong() % BLOCK_SIZE != 0L) {
-            val padding_needed = BLOCK_SIZE - (hd.image_size.toLong() % BLOCK_SIZE)
-            FileOutputStream(image_file, true).use { fos ->
-                fos.write(ByteArray(padding_needed.toInt()))
-            }
-            log.info("$image_file padded: ${hd.image_size} -> ${File(image_file).length()}")
-        } else {
-            log.info("$image_file doesn't need padding")
-        }
+        val imgPaddingNeeded = Helper.round_to_multiple(newImageSize, BLOCK_SIZE) - newImageSize

         // + vbmeta + padding
-        log.info("Appending vbmeta ...")
-        val vbmeta_offset = File(image_file).length()
-        val padding_needed = Helper.round_to_multiple(vbmeta_blob.size.toLong(), BLOCK_SIZE) - vbmeta_blob.size
-        val vbmeta_blob_with_padding = Helper.join(vbmeta_blob, Struct3("${padding_needed}x").pack(null))
-        FileOutputStream(image_file, true).use { fos ->
-            fos.write(vbmeta_blob_with_padding)
-        }
+        val vbmetaOffset = File(image_file).length()
+        val vbmetaBlobWithPadding = vbmetaBlob.paddingWith(BLOCK_SIZE.toUInt())

         // + DONT_CARE chunk
-        log.info("Appending DONT_CARE chunk ...")
-        val vbmeta_end_offset = vbmeta_offset + vbmeta_blob_with_padding.size
-        FileOutputStream(image_file, true).use { fos ->
-            fos.write(Struct3("${partition_size - vbmeta_end_offset - 1 * BLOCK_SIZE}x").pack(null))
-        }
+        val vbmetaEndOffset = vbmetaOffset + vbmetaBlobWithPadding.size
+        val dontCareChunkSize = partition_size - vbmetaEndOffset - 1 * BLOCK_SIZE

         // + AvbFooter + padding
-        log.info("Appending footer ...")
-        val footer = Footer()
-        footer.originalImageSize = original_image_size
-        footer.vbMetaOffset = vbmeta_offset.toULong()
-        footer.vbMetaSize = vbmeta_blob.size.toULong()
-        val footerBob = footer.encode()
-        val footerBlobWithPadding = Helper.join(
-                Struct3("${BLOCK_SIZE - Footer.SIZE}x").pack(null), footerBob)
-        log.info("footer:" + Helper.toHexString(footerBob))
-        log.info(footer.toString())
+        newAvbInfo.footer!!.apply {
+            originalImageSize = newImageSize.toULong()
+            vbMetaOffset = vbmetaOffset.toULong()
+            vbMetaSize = vbmetaBlob.size.toULong()
+        }
+        log.info(newAvbInfo.footer.toString())
+        val footerBlobWithPadding = newAvbInfo.footer!!.encode().paddingWith(BLOCK_SIZE.toUInt(), true)

         FileOutputStream(image_file, true).use { fos ->
+            log.info("1/4 Padding image with $imgPaddingNeeded bytes ...")
+            fos.write(ByteArray(imgPaddingNeeded.toInt()))
+            log.info("2/4 Appending vbmeta (${vbmetaBlobWithPadding.size} bytes)...")
+            fos.write(vbmetaBlobWithPadding)
+            log.info("3/4 Appending DONT CARE CHUNK ($dontCareChunkSize bytes) ...")
+            fos.write(ByteArray(dontCareChunkSize.toInt()))
+            log.info("4/4 Appending AVB footer (${footerBlobWithPadding.size} bytes)...")
             fos.write(footerBlobWithPadding)
         }
-        log.info("add_hash_footer($image_file) done ...")
+        log.info("addHashFooter($image_file) done.")
     }

-    //avbtool::Avb::_generate_vbmeta_blob()
-    private fun generateVbMetaBlob(algorithm_name: String,
-                                   public_key_metadata_path: String?,
-                                   descriptors: Array<Descriptor>,
-                                   chain_partitions: String?,
-                                   inRollbackIndex: Long,
-                                   inFlags: Long,
-                                   props: Map<String, String>?,
-                                   kernel_cmdlines: List<String>?,
-                                   required_libavb_version_minor: UInt,
-                                   inReleaseString: String?): ByteArray {
-        //encoded descriptors
-        var encodedDesc: ByteArray = byteArrayOf()
-        descriptors.forEach { encodedDesc = Helper.join(encodedDesc, it.encode()) }
-        props?.let {
-            it.forEach { t, u ->
-                Helper.join(encodedDesc, PropertyDescriptor(t, u).encode())
-            }
-        }
-        kernel_cmdlines?.let {
-            it.forEach { eachCmdline ->
-                Helper.join(encodedDesc, KernelCmdlineDescriptor(cmdline = eachCmdline).encode())
-            }
-        }
-        //algorithm
-        val alg = Algorithms.get(algorithm_name)!!
-        //encoded pubkey
-        val encodedKey = AuxBlob.encodePubKey(alg)
-
-        //3 - whole aux blob
-        val auxBlob = Blob.getAuxDataBlob(encodedDesc, encodedKey, byteArrayOf())
-
-        //1 - whole header blob
-        val headerBlob = Header().apply {
-            bump_required_libavb_version_minor(required_libavb_version_minor)
-            auxiliary_data_block_size = auxBlob.size.toULong()
-            authentication_data_block_size = Helper.round_to_multiple(
-                    (alg.hash_num_bytes + alg.signature_num_bytes).toLong(), 64).toULong()
-            algorithm_type = alg.algorithm_type.toUInt()
-            hash_offset = 0U
-            hash_size = alg.hash_num_bytes.toULong()
-            signature_offset = alg.hash_num_bytes.toULong()
-            signature_size = alg.signature_num_bytes.toULong()
-            descriptors_offset = 0U
-            descriptors_size = encodedDesc.size.toULong()
-            public_key_offset = descriptors_size
-            public_key_size = encodedKey.size.toULong()
-            //TODO: support pubkey metadata
-            public_key_metadata_size = 0U
-            public_key_metadata_offset = public_key_offset + public_key_size
-            rollback_index = inRollbackIndex.toULong()
-            flags = inFlags.toUInt()
-            if (inReleaseString != null) {
-                log.info("Using preset release string: $inReleaseString")
-                this.release_string = inReleaseString
-            }
-        }.encode()
-
-        //2 - auth blob
-        val authBlob = Blob.getAuthBlob(headerBlob, auxBlob, algorithm_name)
-
-        return Helper.join(headerBlob, authBlob, auxBlob)
+    private fun trimFooter(image_file: String) {
+        var footer: Footer? = null
+        FileInputStream(image_file).use {
+            it.skip(File(image_file).length() - 64)
+            try {
+                footer = Footer(it)
+                log.info("original image $image_file has AVB footer")
+            } catch (e: IllegalArgumentException) {
+                log.info("original image $image_file doesn't have AVB footer")
+            }
+        }
+        footer?.let {
+            FileOutputStream(File(image_file), true).channel.use { fc ->
+                log.info("original image $image_file has AVB footer, " +
+                        "truncate it to original SIZE: ${it.originalImageSize}")
+                fc.truncate(it.originalImageSize.toLong())
+            }
+        }
+    }
+
+    private fun imageSizeCheck(partition_size: Long, image_file: String) {
+        //image size sanity check
+        val maxMetadataSize = MAX_VBMETA_SIZE + MAX_FOOTER_SIZE
+        if (partition_size < maxMetadataSize) {
+            throw IllegalArgumentException("Parition SIZE of $partition_size is too small. " +
+                    "Needs to be at least $maxMetadataSize")
+        }
+        val maxImageSize = partition_size - maxMetadataSize
+        log.info("max_image_size: $maxImageSize")
+
+        //TODO: typical block size = 4096L, from avbtool::Avb::ImageHandler::block_size
+        //since boot.img is not in sparse format, we are safe to hardcode it to 4096L for now
+        if (partition_size % BLOCK_SIZE != 0L) {
+            throw IllegalArgumentException("Partition SIZE of $partition_size is not " +
+                    "a multiple of the image block SIZE 4096")
+        }
+
+        val originalFileSize = File(image_file).length()
+        if (originalFileSize > maxImageSize) {
+            throw IllegalArgumentException("Image size of $originalFileSize exceeds maximum image size " +
+                    "of $maxImageSize in order to fit in a partition size of $partition_size.")
+        }
     }

     fun parseVbMeta(image_file: String): AVBInfo {
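(The rewritten addHashFooter is best read as layout arithmetic: pad the image to a block boundary, append the padded vbmeta blob, fill with a DONT_CARE chunk, and land the footer in the final block of the partition. The offsets, as a Python sketch with BLOCK_SIZE per the hardcoded 4096 above:)

BLOCK_SIZE = 4096

def round_to_multiple(n, m):
    return (n + m - 1) // m * m

def footer_layout(image_size, vbmeta_size, partition_size):
    # mirrors the write order: image | vbmeta | DONT_CARE | footer block
    vbmeta_offset = round_to_multiple(image_size, BLOCK_SIZE)
    vbmeta_end = vbmeta_offset + round_to_multiple(vbmeta_size, BLOCK_SIZE)
    dont_care_size = partition_size - vbmeta_end - 1 * BLOCK_SIZE
    footer_block = partition_size - BLOCK_SIZE  # footer head-padded to here
    return vbmeta_offset, dont_care_size, footer_block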
@@ -250,6 +145,7 @@ class Avb {
         val jsonFile = getJsonFileName(image_file)
         var footer: Footer? = null
         var vbMetaOffset: ULong = 0U
+        // footer
         FileInputStream(image_file).use { fis ->
             fis.skip(File(image_file).length() - Footer.SIZE)
             try {
@@ -261,6 +157,7 @@ class Avb {
             }
         }

+        // header
         var vbMetaHeader = Header()
         FileInputStream(image_file).use { fis ->
             fis.skip(vbMetaOffset.toLong())
@@ -273,34 +170,69 @@ class Avb {
         val auxBlockOffset = authBlockOffset + vbMetaHeader.authentication_data_block_size
         val descStartOffset = auxBlockOffset + vbMetaHeader.descriptors_offset

-        val ai = AVBInfo()
-        ai.footer = footer
-        ai.auxBlob = AuxBlob()
-        ai.header = vbMetaHeader
-        if (vbMetaHeader.public_key_size > 0U) {
-            ai.auxBlob!!.pubkey = AuxBlob.PubKeyInfo()
-            ai.auxBlob!!.pubkey!!.offset = vbMetaHeader.public_key_offset.toLong()
-            ai.auxBlob!!.pubkey!!.size = vbMetaHeader.public_key_size.toLong()
-        }
-        if (vbMetaHeader.public_key_metadata_size > 0U) {
-            ai.auxBlob!!.pubkeyMeta = AuxBlob.PubKeyMetadataInfo()
-            ai.auxBlob!!.pubkeyMeta!!.offset = vbMetaHeader.public_key_metadata_offset.toLong()
-            ai.auxBlob!!.pubkeyMeta!!.size = vbMetaHeader.public_key_metadata_size.toLong()
-        }
+        val ai = AVBInfo(vbMetaHeader, null, AuxBlob(), footer)
+
+        // Auth blob
+        if (vbMetaHeader.authentication_data_block_size > 0U) {
+            FileInputStream(image_file).use { fis ->
+                fis.skip(vbMetaOffset.toLong())
+                fis.skip(Header.SIZE.toLong())
+                fis.skip(vbMetaHeader.hash_offset.toLong())
+                val ba = ByteArray(vbMetaHeader.hash_size.toInt())
+                fis.read(ba)
+                log.debug("Parsed Auth Hash (Header & Aux Blob): " + Hex.encodeHexString(ba))
+                val bb = ByteArray(vbMetaHeader.signature_size.toInt())
+                fis.read(bb)
+                log.debug("Parsed Auth Signature (of hash): " + Hex.encodeHexString(bb))
+                ai.authBlob = AuthBlob()
+                ai.authBlob!!.offset = authBlockOffset
+                ai.authBlob!!.size = vbMetaHeader.authentication_data_block_size
+                ai.authBlob!!.hash = Hex.encodeHexString(ba)
+                ai.authBlob!!.signature = Hex.encodeHexString(bb)
+            }
+        }

+        // aux - desc
         var descriptors: List<Any> = mutableListOf()
         if (vbMetaHeader.descriptors_size > 0U) {
             FileInputStream(image_file).use { fis ->
                 fis.skip(descStartOffset.toLong())
                 descriptors = UnknownDescriptor.parseDescriptors2(fis, vbMetaHeader.descriptors_size.toLong())
             }
             descriptors.forEach {
                 log.debug(it.toString())
+                when (it) {
+                    is PropertyDescriptor -> {
+                        ai.auxBlob!!.propertyDescriptor.add(it)
+                    }
+                    is HashDescriptor -> {
+                        ai.auxBlob!!.hashDescriptors.add(it)
+                    }
+                    is KernelCmdlineDescriptor -> {
+                        ai.auxBlob!!.kernelCmdlineDescriptor.add(it)
+                    }
+                    is HashTreeDescriptor -> {
+                        ai.auxBlob!!.hashTreeDescriptor.add(it)
+                    }
+                    is ChainPartitionDescriptor -> {
+                        ai.auxBlob!!.chainPartitionDescriptor.add(it)
+                    }
+                    is UnknownDescriptor -> {
+                        ai.auxBlob!!.unknownDescriptors.add(it)
+                    }
+                    else -> {
+                        throw IllegalArgumentException("invalid descriptor: $it")
+                    }
+                }
             }
         }

+        // aux - pubkey
         if (vbMetaHeader.public_key_size > 0U) {
+            ai.auxBlob!!.pubkey = AuxBlob.PubKeyInfo()
+            ai.auxBlob!!.pubkey!!.offset = vbMetaHeader.public_key_offset.toLong()
+            ai.auxBlob!!.pubkey!!.size = vbMetaHeader.public_key_size.toLong()
             FileInputStream(image_file).use { fis ->
                 fis.skip(auxBlockOffset.toLong())
                 fis.skip(vbMetaHeader.public_key_offset.toLong())
@@ -309,65 +241,21 @@ class Avb {
                 log.debug("Parsed Pub Key: " + Hex.encodeHexString(ai.auxBlob!!.pubkey!!.pubkey))
             }
         }

+        // aux - pkmd
         if (vbMetaHeader.public_key_metadata_size > 0U) {
-            FileInputStream(image_file).use { fis ->
-                fis.skip(vbMetaOffset.toLong())
-                fis.skip(Header.SIZE.toLong())
-                fis.skip(vbMetaHeader.public_key_metadata_offset.toLong())
-                val ba = ByteArray(vbMetaHeader.public_key_metadata_size.toInt())
-                fis.read(ba)
-                log.debug("Parsed Pub Key Metadata: " + Hex.encodeHexString(ba))
-            }
-        }
-
-        if (vbMetaHeader.authentication_data_block_size > 0U) {
-            FileInputStream(image_file).use { fis ->
-                fis.skip(vbMetaOffset.toLong())
-                fis.skip(Header.SIZE.toLong())
-                fis.skip(vbMetaHeader.hash_offset.toLong())
-                val ba = ByteArray(vbMetaHeader.hash_size.toInt())
-                fis.read(ba)
-                log.debug("Parsed Auth Hash (Header & Aux Blob): " + Hex.encodeHexString(ba))
-                val bb = ByteArray(vbMetaHeader.signature_size.toInt())
-                fis.read(bb)
-                log.debug("Parsed Auth Signature (of hash): " + Hex.encodeHexString(bb))
-
-                ai.authBlob = AuthBlob()
-                ai.authBlob!!.offset = authBlockOffset
-                ai.authBlob!!.size = vbMetaHeader.authentication_data_block_size
-                ai.authBlob!!.hash = Hex.encodeHexString(ba)
-                ai.authBlob!!.signature = Hex.encodeHexString(bb)
-            }
-        }
-
-        descriptors.forEach {
-            when (it) {
-                is PropertyDescriptor -> {
-                    ai.auxBlob!!.propertyDescriptor.add(it)
-                }
-                is HashDescriptor -> {
-                    ai.auxBlob!!.hashDescriptors.add(it)
-                }
-                is KernelCmdlineDescriptor -> {
-                    ai.auxBlob!!.kernelCmdlineDescriptor.add(it)
-                }
-                is HashTreeDescriptor -> {
-                    ai.auxBlob!!.hashTreeDescriptor.add(it)
-                }
-                is ChainPartitionDescriptor -> {
-                    ai.auxBlob!!.chainPartitionDescriptor.add(it)
-                }
-                is UnknownDescriptor -> {
-                    ai.auxBlob!!.unknownDescriptors.add(it)
-                }
-                else -> {
-                    throw IllegalArgumentException("invalid descriptor: $it")
-                }
-            }
-        }
-
-        val aiStr = ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(ai)
-        log.debug(aiStr)
+            ai.auxBlob!!.pubkeyMeta = AuxBlob.PubKeyMetadataInfo()
+            ai.auxBlob!!.pubkeyMeta!!.offset = vbMetaHeader.public_key_metadata_offset.toLong()
+            ai.auxBlob!!.pubkeyMeta!!.size = vbMetaHeader.public_key_metadata_size.toLong()
+            FileInputStream(image_file).use { fis ->
+                fis.skip(auxBlockOffset.toLong())
+                fis.skip(vbMetaHeader.public_key_metadata_offset.toLong())
+                ai.auxBlob!!.pubkeyMeta!!.pkmd = ByteArray(vbMetaHeader.public_key_metadata_size.toInt())
+                fis.read(ai.auxBlob!!.pubkeyMeta!!.pkmd)
+                log.debug("Parsed Pub Key Metadata: " + Helper.toHexString(ai.auxBlob!!.pubkeyMeta!!.pkmd))
+            }
+        }

         ObjectMapper().writerWithDefaultPrettyPrinter().writeValue(File(jsonFile), ai)
         log.info("vbmeta info written to $jsonFile")
@@ -377,22 +265,9 @@ class Avb {
     private fun packVbMeta(info: AVBInfo? = null, image_file: String? = null): ByteArray {
         val ai = info ?: ObjectMapper().readValue(File(getJsonFileName(image_file!!)), AVBInfo::class.java)
         val alg = Algorithms.get(ai.header!!.algorithm_type.toInt())!!
-        val encodedDesc = ai.auxBlob!!.encodeDescriptors()
-        //encoded pubkey
-        val encodedKey = AuxBlob.encodePubKey(alg)

         //3 - whole aux blob
-        var auxBlob = byteArrayOf()
-        if (ai.header!!.auxiliary_data_block_size > 0U) {
-            if (encodedKey.contentEquals(ai.auxBlob!!.pubkey!!.pubkey)) {
-                log.info("Using the same key as original vbmeta")
-            } else {
-                log.warn("Using different key from original vbmeta")
-            }
-            auxBlob = Blob.getAuxDataBlob(encodedDesc, encodedKey, byteArrayOf())
-        } else {
-            log.info("No aux blob")
-        }
+        val auxBlob = ai.auxBlob?.encode(alg) ?: byteArrayOf()

         //1 - whole header blob
         val headerBlob = ai.header!!.apply {
@@ -401,7 +276,7 @@ class Avb {
                     (alg.hash_num_bytes + alg.signature_num_bytes).toLong(), 64).toULong()
             descriptors_offset = 0U
-            descriptors_size = encodedDesc.size.toULong()
+            descriptors_size = ai.auxBlob?.descriptorSize?.toULong() ?: 0U
             hash_offset = 0U
             hash_size = alg.hash_num_bytes.toULong()
@@ -410,17 +285,17 @@ class Avb {
             signature_size = alg.signature_num_bytes.toULong()
             public_key_offset = descriptors_size
-            public_key_size = encodedKey.size.toULong()
-            //TODO: support pubkey metadata
-            public_key_metadata_size = 0U
+            public_key_size = AuxBlob.encodePubKey(alg).size.toULong()
+            public_key_metadata_size = ai.auxBlob!!.pubkeyMeta?.pkmd?.size?.toULong() ?: 0U
             public_key_metadata_offset = public_key_offset + public_key_size
+            log.info("pkmd size: $public_key_metadata_size, pkmd offset : $public_key_metadata_offset")
         }.encode()

         //2 - auth blob
         var authBlob = byteArrayOf()
         if (ai.authBlob != null) {
-            authBlob = Blob.getAuthBlob(headerBlob, auxBlob, alg.name)
+            authBlob = AuthBlob.createBlob(headerBlob, auxBlob, alg.name)
         } else {
             log.info("No auth blob")
         }

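Editor's note: the repack path above boils down to three 64-byte-aligned blocks laid out in a fixed order. A toy sketch of that assembly, assuming the three blobs are already encoded (names are illustrative, not the repo's API):

```kotlin
// Toy assembly of a vbmeta image in the order used by packVbMeta():
// fixed 256-byte header, then the authentication blob, then the aux blob.
fun assembleVbMetaSketch(headerBlob: ByteArray, authBlob: ByteArray, auxBlob: ByteArray): ByteArray {
    require(headerBlob.size == 256) { "vbmeta header is fixed at 256 bytes" }
    require(authBlob.size % 64 == 0 && auxBlob.size % 64 == 0) { "blocks are 64-byte aligned" }
    return headerBlob + authBlob + auxBlob
}
```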
@@ -1,7 +0,0 @@
package avb
@ExperimentalUnsignedTypes
class VBMeta(var header: Header? = null,
var authBlob: AuthBlob? = null,
var auxBlob: AuxBlob? = null) {
}

@@ -1,4 +1,4 @@
-package avb
+package avb.blob
import avb.alg.Algorithms
import cfig.Helper
@@ -6,20 +6,14 @@ import cfig.io.Struct3
import org.slf4j.LoggerFactory
import java.security.MessageDigest
-class Blob {
-   @ExperimentalUnsignedTypes
+@ExperimentalUnsignedTypes
+data class AuthBlob(
+       var offset: ULong = 0U,
+       var size: ULong = 0U,
+       var hash: String? = null,
+       var signature: String? = null) {
    companion object {
-       private val log = LoggerFactory.getLogger(Blob::class.java)
-       //encoded_descriptors + encoded_key + pkmd_blob + (padding)
-       fun getAuxDataBlob(encodedDesc: ByteArray, encodedKey: ByteArray, pkmdBlob: ByteArray): ByteArray {
-           val auxSize = Helper.round_to_multiple(
-                   (encodedDesc.size + encodedKey.size + pkmdBlob.size).toLong(),
-                   64)
-           return Struct3("${auxSize}b").pack(Helper.join(encodedDesc, encodedKey, pkmdBlob))
-       }
-       fun getAuthBlob(header_data_blob: ByteArray,
+       fun createBlob(header_data_blob: ByteArray,
                       aux_data_blob: ByteArray,
                       algorithm_name: String): ByteArray {
            val alg = Algorithms.get(algorithm_name)!!
@@ -43,5 +37,7 @@ class Blob {
            val authData = Helper.join(binaryHash, binarySignature)
            return Helper.join(authData, Struct3("${authBlockSize - authData.size}x").pack(0))
        }
+       private val log = LoggerFactory.getLogger(AuthBlob::class.java)
    }
}

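Editor's note: what `createBlob` computes is: hash the header and aux blocks together, sign, and pad out to the authentication block size. A simplified, self-contained sketch with a throwaway JDK RSA key standing in for the algorithm's configured key (real avbtool-style signing uses the user-supplied key for the chosen SHA256_RSA* algorithm):

```kotlin
import java.security.KeyPairGenerator
import java.security.MessageDigest
import java.security.Signature

// Sketch of the auth-blob construction: digest(header || aux), an RSA
// signature over the same bytes, then zero padding up to the auth block size.
fun createAuthBlobSketch(header: ByteArray, aux: ByteArray, authBlockSize: Int): ByteArray {
    val hash = MessageDigest.getInstance("SHA-256").apply {
        update(header)
        update(aux)
    }.digest()
    val key = KeyPairGenerator.getInstance("RSA").apply { initialize(2048) }.generateKeyPair()
    val signature = Signature.getInstance("SHA256withRSA").apply {
        initSign(key.private)
        update(header)
        update(aux)
    }.sign()
    val authData = hash + signature
    require(authBlockSize >= authData.size)
    return authData + ByteArray(authBlockSize - authData.size) // zero padding
}
```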
@@ -1,16 +1,19 @@
-package avb
+package avb.blob
import avb.alg.Algorithm
import avb.desc.*
import cfig.Helper
import cfig.io.Struct3
+import com.fasterxml.jackson.annotation.JsonIgnore
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties
import org.junit.Assert
import org.slf4j.LoggerFactory
import java.nio.file.Files
import java.nio.file.Paths
@ExperimentalUnsignedTypes
-data class AuxBlob(
+@JsonIgnoreProperties("descriptorSize")
+class AuxBlob(
        var pubkey: PubKeyInfo? = null,
        var pubkeyMeta: PubKeyMetadataInfo? = null,
        var propertyDescriptor: MutableList<PropertyDescriptor> = mutableListOf(),
@@ -18,8 +21,13 @@ data class AuxBlob(
        var hashDescriptors: MutableList<HashDescriptor> = mutableListOf(),
        var kernelCmdlineDescriptor: MutableList<KernelCmdlineDescriptor> = mutableListOf(),
        var chainPartitionDescriptor: MutableList<ChainPartitionDescriptor> = mutableListOf(),
-       var unknownDescriptors: MutableList<UnknownDescriptor> = mutableListOf()
-) {
+       var unknownDescriptors: MutableList<UnknownDescriptor> = mutableListOf()) {
+   val descriptorSize: Int
+       get(): Int {
+           return this.encodeDescriptors().size
+       }
    data class PubKeyInfo(
            var offset: Long = 0L,
            var size: Long = 0L,
@@ -32,8 +40,7 @@ data class AuxBlob(
            var pkmd: ByteArray = byteArrayOf()
    )
-   fun encodeDescriptors(): ByteArray {
-       var ret = byteArrayOf()
+   private fun encodeDescriptors(): ByteArray {
        return mutableListOf<Descriptor>().let { descList ->
            arrayOf(this.propertyDescriptor, //tag 0
                    this.hashTreeDescriptor, //tag 1
@@ -44,6 +51,7 @@ data class AuxBlob(
            ).forEach { typedList ->
                typedList.forEach { descList.add(it) }
            }
+           var ret = byteArrayOf()
            descList.sortBy { it.sequence }
            descList.forEach { ret = Helper.join(ret, it.encode()) }
            ret
@@ -51,14 +59,33 @@ data class AuxBlob(
    }
    //encoded_descriptors + encoded_key + pkmd_blob + (padding)
-   fun encode(): ByteArray {
-       val encodedDesc = this.encodeDescriptors()
-       var sumOfSize = encodedDesc.size
-       this.pubkey?.let { sumOfSize += it.pubkey.size }
-       this.pubkeyMeta?.let { sumOfSize += it.pkmd.size }
-       val auxSize = Helper.round_to_multiple(sumOfSize.toLong(), 64)
-       return Struct3("${auxSize}b").pack(
-               Helper.joinWithNulls(encodedDesc, this.pubkey?.pubkey, this.pubkeyMeta?.pkmd))
+   fun encode(alg: Algorithm): ByteArray {
+       //descriptors
+       val encodedDesc = this.encodeDescriptors()
+       //pubkey
+       val encodedKey = encodePubKey(alg)
+       if (this.pubkey != null) {
+           if (encodedKey.contentEquals(this.pubkey!!.pubkey)) {
+               log.info("Using the same key as original vbmeta")
+           } else {
+               log.warn("Using different key from original vbmeta")
+           }
+       } else {
+           log.info("no pubkey in auxBlob")
+       }
+       //pkmd
+       var encodedPkmd = byteArrayOf()
+       if (this.pubkeyMeta != null) {
+           encodedPkmd = this.pubkeyMeta!!.pkmd
+           log.warn("adding pkmd [size=${this.pubkeyMeta!!.pkmd.size}]...")
+       } else {
+           log.info("no pubkey metadata in auxBlob")
+       }
+       val auxSize = Helper.round_to_multiple(
+               (encodedDesc.size + encodedKey.size + encodedPkmd.size).toLong(),
+               64)
+       return Struct3("${auxSize}b").pack(Helper.join(encodedDesc, encodedKey, encodedPkmd))
    }
    companion object {

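Editor's note: the encoded aux blob is just the three variable-length parts concatenated and zero-padded to a 64-byte boundary, which is also why the header can set `public_key_offset = descriptors_size`. The same arithmetic in plain Kotlin (illustrative name, not the repo's API):

```kotlin
// The aux blob is descriptors || pubkey || pkmd, zero-padded to 64 bytes;
// within it, descriptors start at 0 and the public key follows immediately.
fun encodeAuxBlobSketch(descriptors: ByteArray, pubkey: ByteArray, pkmd: ByteArray): ByteArray {
    val raw = descriptors + pubkey + pkmd
    val padded = ByteArray((raw.size + 63) / 64 * 64) // round_to_multiple(raw.size, 64)
    raw.copyInto(padded)
    return padded
}
```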
@@ -1,4 +1,4 @@
-package avb
+package avb.blob
import cfig.io.Struct3
import org.junit.Assert
@@ -32,19 +32,6 @@ data class Footer constructor(
        var vbMetaOffset: ULong = 0U,
        var vbMetaSize: ULong = 0U
) {
companion object {
const val MAGIC = "AVBf"
const val SIZE = 64
private const val RESERVED = 28
const val FOOTER_VERSION_MAJOR = 1U
const val FOOTER_VERSION_MINOR = 0U
private const val FORMAT_STRING = "!4s2L3Q${RESERVED}x"
init {
Assert.assertEquals(SIZE, Struct3(FORMAT_STRING).calcSize())
}
}
    @Throws(IllegalArgumentException::class)
    constructor(iS: InputStream) : this() {
        val info = Struct3(FORMAT_STRING).unpack(iS)
@@ -59,10 +46,13 @@ data class Footer constructor(
        vbMetaSize = info[5] as ULong
    }
constructor(originalImageSize: ULong, vbMetaOffset: ULong, vbMetaSize: ULong)
: this(FOOTER_VERSION_MAJOR, FOOTER_VERSION_MINOR, originalImageSize, vbMetaOffset, vbMetaSize)
    @Throws(IllegalArgumentException::class)
    constructor(image_file: String) : this() {
        FileInputStream(image_file).use { fis ->
-           fis.skip(File(image_file).length() - Footer.SIZE)
+           fis.skip(File(image_file).length() - SIZE)
            val footer = Footer(fis)
            this.versionMajor = footer.versionMajor
            this.versionMinor = footer.versionMinor
@@ -73,12 +63,25 @@
    }
    fun encode(): ByteArray {
-       return Struct3(FORMAT_STRING).pack(MAGIC,
-               this.versionMajor,
-               this.versionMinor,
-               this.originalImageSize,
-               this.vbMetaOffset,
-               this.vbMetaSize,
-               null)
+       return Struct3(FORMAT_STRING).pack(MAGIC, //4s
+               this.versionMajor, //L
+               this.versionMinor, //L
+               this.originalImageSize, //Q
+               this.vbMetaOffset, //Q
+               this.vbMetaSize, //Q
+               null) //${RESERVED}x
}
companion object {
private const val MAGIC = "AVBf"
const val SIZE = 64
private const val RESERVED = 28
private const val FOOTER_VERSION_MAJOR = 1U
private const val FOOTER_VERSION_MINOR = 0U
private const val FORMAT_STRING = "!4s2L3Q${RESERVED}x"
init {
Assert.assertEquals(SIZE, Struct3(FORMAT_STRING).calcSize())
}
    }
}

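Editor's note: the footer is always the last 64 bytes of the partition image, so the `Footer(image_file)` constructor above can simply seek to EOF-64. A self-contained sketch of the same decode using a big-endian ByteBuffer instead of Struct3 (`FooterSketch` is a hypothetical type):

```kotlin
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

data class FooterSketch(val originalImageSize: Long, val vbMetaOffset: Long, val vbMetaSize: Long)

// Decode "!4s2L3Q${RESERVED}x" (RESERVED = 28) from the last 64 bytes by hand.
fun parseFooterSketch(imageFile: String): FooterSketch {
    val bytes = File(imageFile).readBytes()
    val bb = ByteBuffer.wrap(bytes, bytes.size - 64, 64).order(ByteOrder.BIG_ENDIAN)
    val magic = ByteArray(4).also { bb.get(it) }
    require(String(magic) == "AVBf") { "image has no AVB footer" }
    val versionMajor = bb.getInt() // 2L: footer version major/minor
    val versionMinor = bb.getInt()
    println("footer v$versionMajor.$versionMinor")
    return FooterSketch(bb.getLong(), bb.getLong(), bb.getLong()) // 3Q; 28 reserved bytes follow
}
```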
@@ -1,4 +1,4 @@
-package avb
+package avb.blob
import cfig.Avb
import cfig.io.Struct3
@@ -56,21 +56,21 @@ data class Header(
    fun encode(): ByteArray {
        return Struct3(FORMAT_STRING).pack(
-               magic,
-               this.required_libavb_version_major, this.required_libavb_version_minor,
-               this.authentication_data_block_size, this.auxiliary_data_block_size,
-               this.algorithm_type,
-               this.hash_offset, this.hash_size,
-               this.signature_offset, this.signature_size,
-               this.public_key_offset, this.public_key_size,
-               this.public_key_metadata_offset, this.public_key_metadata_size,
-               this.descriptors_offset, this.descriptors_size,
-               this.rollback_index,
-               this.flags,
+               magic, //4s
+               this.required_libavb_version_major, this.required_libavb_version_minor, //2L
+               this.authentication_data_block_size, this.auxiliary_data_block_size, //2Q
+               this.algorithm_type, //L
+               this.hash_offset, this.hash_size, //hash 2Q
+               this.signature_offset, this.signature_size, //sig 2Q
+               this.public_key_offset, this.public_key_size, //pubkey 2Q
+               this.public_key_metadata_offset, this.public_key_metadata_size, //pkmd 2Q
+               this.descriptors_offset, this.descriptors_size, //desc 2Q
+               this.rollback_index, //Q
+               this.flags, //L
                null, //${REVERSED0}x
                this.release_string, //47s
                null, //x
                null) //${REVERSED}x
    }
    fun bump_required_libavb_version_minor(minor: UInt) {
@@ -78,11 +78,11 @@ data class Header(
    }
    companion object {
-       const val magic: String = "AVB0"
+       private const val magic: String = "AVB0"
        const val SIZE = 256
        private const val REVERSED0 = 4
        private const val REVERSED = 80
-       const val FORMAT_STRING = ("!4s2L2QL11QL${REVERSED0}x47sx" + "${REVERSED}x")
+       private const val FORMAT_STRING = ("!4s2L2QL11QL${REVERSED0}x47sx" + "${REVERSED}x")
        init {
            Assert.assertEquals(SIZE, Struct3(FORMAT_STRING).calcSize())

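Editor's note: the `init` assertion above pins FORMAT_STRING to the fixed 256-byte vbmeta header. The arithmetic, written out as a quick sanity check:

```kotlin
// "!4s2L2QL11QL${REVERSED0}x47sx${REVERSED}x" with REVERSED0 = 4, REVERSED = 80:
fun main() {
    val size = 4 +   // "AVB0" magic (4s)
        2 * 4 +      // required libavb version major/minor (2L)
        2 * 8 +      // authentication/auxiliary data block sizes (2Q)
        4 +          // algorithm type (L)
        11 * 8 +     // hash/sig/pubkey/pkmd/descriptor offset+size pairs, rollback index (11Q)
        4 +          // flags (L)
        4 +          // reserved (4x)
        47 + 1 +     // release string (47s) plus its NUL terminator (x)
        80           // reserved tail (80x)
    check(size == 256) // Header.SIZE
}
```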
@@ -2,22 +2,36 @@ package avb.desc
import cfig.Helper
import cfig.io.Struct3
+import org.apache.commons.codec.binary.Hex
import org.junit.Assert
+import org.slf4j.LoggerFactory
import java.io.File
+import java.io.FileInputStream
import java.io.InputStream
import java.security.MessageDigest
@ExperimentalUnsignedTypes
-class HashDescriptor(var image_size: ULong = 0U,
-                     var hash_algorithm: String = "",
-                     var hash_algorithm_str: String = "",
-                     var partition_name_len: UInt = 0U,
-                     var salt_len: UInt = 0U,
-                     var digest_len: UInt = 0U,
-                     var flags: UInt = 0U,
-                     var partition_name: String = "",
-                     var salt: ByteArray = byteArrayOf(),
-                     var digest: ByteArray = byteArrayOf()) : Descriptor(TAG, 0U, 0) {
+class HashDescriptor(var flags: UInt = 0U,
+                     var partition_name: String = "",
+                     var hash_algorithm: String = "",
+                     var image_size: ULong = 0U,
+                     var salt: ByteArray = byteArrayOf(),
+                     var digest: ByteArray = byteArrayOf(),
+                     var partition_name_len: UInt = 0U,
+                     var salt_len: UInt = 0U,
+                     var digest_len: UInt = 0U)
+    : Descriptor(TAG, 0U, 0) {
+   var flagsInterpretation: String = ""
+       get() {
+           var ret = ""
+           if (this.flags and AVB_HASH_DESCRIPTOR_FLAGS_DO_NOT_USE_AB == 1U) {
+               ret += "1:no-A/B system"
+           } else {
+               ret += "0:A/B system"
+           }
+           return ret
+       }
    constructor(data: InputStream, seq: Int = 0) : this() {
        val info = Struct3(FORMAT_STRING).unpack(data)
        this.tag = info[0] as ULong
@@ -39,7 +53,6 @@ class HashDescriptor(var image_size: ULong = 0U,
        this.partition_name = payload[0] as String
        this.salt = payload[1] as ByteArray
        this.digest = payload[2] as ByteArray
-       this.hash_algorithm_str = this.hash_algorithm
    }
    override fun encode(): ByteArray {
@@ -67,14 +80,54 @@ class HashDescriptor(var image_size: ULong = 0U,
        val digest = hasher.digest()
    }
fun update(image_file: String, use_persistent_digest: Boolean = false): HashDescriptor {
//salt
if (this.salt.isEmpty()) {
//If salt is not explicitly specified, choose a hash that's the same size as the hash size
val expectedDigestSize = MessageDigest.getInstance(Helper.pyAlg2java(hash_algorithm)).digest().size
FileInputStream(File("/dev/urandom")).use {
val randomSalt = ByteArray(expectedDigestSize)
it.read(randomSalt)
log.warn("salt is empty, using random salt[$expectedDigestSize]: " + Helper.toHexString(randomSalt))
this.salt = randomSalt
}
} else {
log.info("preset salt[${this.salt.size}] is valid: ${Hex.encodeHexString(this.salt)}")
}
//size
this.image_size = File(image_file).length().toULong()
//flags
if (this.flags and 1U == 1U) {
log.info("flag: use_ab = 0")
} else {
log.info("flag: use_ab = 1")
}
if (!use_persistent_digest) {
//hash digest
val newDigest = MessageDigest.getInstance(Helper.pyAlg2java(hash_algorithm)).apply {
update(salt)
update(File(image_file).readBytes())
}.digest()
log.info("Digest(salt + file): " + Helper.toHexString(newDigest))
this.digest = newDigest
}
return this
}
    companion object {
        const val TAG: ULong = 2U
        private const val RESERVED = 60
        private const val SIZE = 72 + RESERVED
        private const val FORMAT_STRING = "!3Q32s4L${RESERVED}x"
private val log = LoggerFactory.getLogger(HashDescriptor::class.java)
private const val AVB_HASH_DESCRIPTOR_FLAGS_DO_NOT_USE_AB = 1U
    }
    override fun toString(): String {
        return "HashDescriptor(TAG=$TAG, image_size=$image_size, hash_algorithm=$hash_algorithm, flags=$flags, partition_name='$partition_name', salt=${Helper.toHexString(salt)}, digest=${Helper.toHexString(digest)})"
    }
}

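Editor's note: `update()` above implements the avbtool convention for hash descriptors: if no salt is preset, draw one as long as the digest itself, then hash salt-then-image. A compact sketch with `SecureRandom` standing in for the original's /dev/urandom read (function name is illustrative):

```kotlin
import java.io.File
import java.security.MessageDigest
import java.security.SecureRandom

// Salt-then-image hashing as in update(): a missing salt is drawn at the
// digest's own length; digest = H(salt || image bytes).
fun hashImageSketch(imageFile: String, algorithm: String = "SHA-256",
                    presetSalt: ByteArray? = null): Pair<ByteArray, ByteArray> {
    val md = MessageDigest.getInstance(algorithm)
    val salt = presetSalt ?: ByteArray(md.digestLength).also { SecureRandom().nextBytes(it) }
    md.update(salt)
    md.update(File(imageFile).readBytes())
    return salt to md.digest()
}
```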
@@ -2,12 +2,12 @@ package avb.desc
import cfig.Helper
import cfig.io.Struct3
import org.slf4j.LoggerFactory
import java.io.InputStream
import java.util.*
@ExperimentalUnsignedTypes
class HashTreeDescriptor(
+       var flags: UInt = 0U,
        var dm_verity_version: UInt = 0u,
        var image_size: ULong = 0UL,
        var tree_offset: ULong = 0UL,
@@ -20,8 +20,18 @@ class HashTreeDescriptor(
        var hash_algorithm: String = "",
        var partition_name: String = "",
        var salt: ByteArray = byteArrayOf(),
-       var root_digest: ByteArray = byteArrayOf(),
-       var flags: UInt = 0U) : Descriptor(TAG, 0U, 0) {
+       var root_digest: ByteArray = byteArrayOf()) : Descriptor(TAG, 0U, 0) {
+   var flagsInterpretation: String = ""
+       get() {
+           var ret = ""
+           if (this.flags and AVB_HASHTREE_DESCRIPTOR_FLAGS_DO_NOT_USE_AB == 1U) {
+               ret += "1:no-A/B system"
+           } else {
+               ret += "0:A/B system"
+           }
+           return ret
+       }
    constructor(data: InputStream, seq: Int = 0) : this() {
        this.sequence = seq
        val info = Struct3(FORMAT_STRING).unpack(data)
@@ -87,5 +97,6 @@ class HashTreeDescriptor(
        private const val RESERVED = 60L
        private const val SIZE = 120 + RESERVED
        private const val FORMAT_STRING = "!2QL3Q3L2Q32s4L${RESERVED}x"
+       private const val AVB_HASHTREE_DESCRIPTOR_FLAGS_DO_NOT_USE_AB = 1U
    }
}

@@ -9,7 +9,19 @@ import java.io.InputStream
class KernelCmdlineDescriptor(
        var flags: UInt = 0U,
        var cmdlineLength: UInt = 0U,
-       var cmdline: String = "") : Descriptor(TAG, 0U, 0) {
+       var cmdline: String = "")
+   : Descriptor(TAG, 0U, 0) {
var flagsInterpretation: String = ""
get() {
var ret = ""
if (this.flags and flagHashTreeEnabled == flagHashTreeEnabled) {
ret += "$flagHashTreeEnabled: hashTree Enabled"
} else if (this.flags and flagHashTreeDisabled == flagHashTreeDisabled) {
ret += "$flagHashTreeDisabled: hashTree Disabled"
}
return ret
}
    @Throws(IllegalArgumentException::class)
    constructor(data: InputStream, seq: Int = 0) : this() {
        val info = Struct3(FORMAT_STRING).unpack(data)
@@ -42,8 +54,10 @@ class KernelCmdlineDescriptor(
        const val TAG: ULong = 3U
        const val SIZE = 24
        const val FORMAT_STRING = "!2Q2L" //# tag, num_bytes_following (descriptor header), flags, cmdline length (bytes)
-       const val flagHashTreeEnabled = 1
-       const val flagHashTreeDisabled = 2
+       //AVB_KERNEL_CMDLINE_FLAGS_USE_ONLY_IF_HASHTREE_NOT_DISABLED
+       const val flagHashTreeEnabled = 1U
+       //AVB_KERNEL_CMDLINE_FLAGS_USE_ONLY_IF_HASHTREE_DISABLED
+       const val flagHashTreeDisabled = 2U
        init {
            Assert.assertEquals(SIZE, Struct3(FORMAT_STRING).calcSize())

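Editor's note: the two flag constants are plain bit tests over the descriptor's `flags` word, which is what `flagsInterpretation` renders. A minimal sketch of the same logic (hypothetical helper):

```kotlin
// Bit tests behind flagsInterpretation for cmdline descriptors:
// flag 1 = use only if hashtree is NOT disabled, flag 2 = use only if disabled.
fun interpretCmdlineFlags(flags: UInt): String = when {
    (flags and 1U) == 1U -> "1: hashTree Enabled"
    (flags and 2U) == 2U -> "2: hashTree Disabled"
    else -> ""
}

fun main() {
    check(interpretCmdlineFlags(1U) == "1: hashTree Enabled")
    check(interpretCmdlineFlags(2U) == "2: hashTree Disabled")
}
```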
@@ -31,7 +31,7 @@ class UnknownDescriptor(var data: ByteArray = byteArrayOf()) : Descriptor(0U, 0U
        return "UnknownDescriptor(tag=$tag, SIZE=${data.size}, data=${Hex.encodeHexString(data)}"
    }
-   fun analyze(): Any {
+   fun analyze(): Descriptor {
        return when (this.tag.toUInt()) {
            0U -> {
                PropertyDescriptor(ByteArrayInputStream(this.encode()), this.sequence)
@@ -82,9 +82,9 @@ class UnknownDescriptor(var data: ByteArray = byteArrayOf()) : Descriptor(0U, 0U
            return ret
        }
-       fun parseDescriptors2(stream: InputStream, totalSize: Long): List<Any> {
+       fun parseDescriptors2(stream: InputStream, totalSize: Long): List<Descriptor> {
            log.info("Parse descriptors stream, SIZE = $totalSize")
-           val ret: MutableList<Any> = mutableListOf()
+           val ret: MutableList<Descriptor> = mutableListOf()
            var currentSize = 0L
            var seq = 0
            while (true) {

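Editor's note: `parseDescriptors2` relies on every AVB descriptor opening with a 16-byte big-endian prelude of (tag, num_bytes_following). A self-contained sketch of that walk, skipping bodies instead of dispatching to the typed descriptor classes:

```kotlin
import java.io.ByteArrayInputStream
import java.io.DataInputStream
import java.io.InputStream

// Walk a descriptor stream: read the (tag: u64, num_bytes_following: u64)
// prelude, skip the body, repeat until totalSize bytes are consumed.
fun walkDescriptorsSketch(stream: InputStream, totalSize: Long): List<Pair<Long, Long>> {
    val dis = DataInputStream(stream)
    val ret = mutableListOf<Pair<Long, Long>>()
    var consumed = 0L
    while (consumed < totalSize) {
        val tag = dis.readLong()                  // descriptor tag
        val numBytesFollowing = dis.readLong()    // body size, 8-byte aligned
        dis.skipBytes(numBytesFollowing.toInt())  // body handled elsewhere
        ret.add(tag to numBytesFollowing)
        consumed += 16 + numBytesFollowing
    }
    return ret
}

fun main() {
    check(walkDescriptorsSketch(ByteArrayInputStream(byteArrayOf()), 0).isEmpty())
}
```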
@@ -195,12 +195,14 @@ open class BootImgHeader(
        //refresh second bootloader size
        if (0U == this.secondBootloaderLength) {
            param.second = null
+           this.secondBootloaderOffset = 0U
        } else {
            this.secondBootloaderLength = File(param.second!!).length().toUInt()
        }
        //refresh recovery dtbo size
        if (0U == this.recoveryDtboLength) {
            param.dtbo = null
+           this.recoveryDtboOffset = 0U
        } else {
            this.recoveryDtboLength = File(param.dtbo!!).length().toUInt()
        }

@@ -79,9 +79,9 @@ class BootImgInfo(iS: InputStream?) : BootImgHeader(iS) {
        if (this.secondBootloaderLength > 0U) {
            ret.addArgument(" --second ")
            ret.addArgument(param.second)
-           ret.addArgument(" --second_offset ")
-           ret.addArgument("0x" + Integer.toHexString(this.secondBootloaderOffset.toInt()))
        }
+       ret.addArgument(" --second_offset ")
+       ret.addArgument("0x" + Integer.toHexString(this.secondBootloaderOffset.toInt()))
        if (!board.isBlank()) {
            ret.addArgument(" --board ")
            ret.addArgument(board)
@@ -118,7 +118,7 @@ class BootImgInfo(iS: InputStream?) : BootImgHeader(iS) {
        ret.addArgument(" --output ")
        //ret.addArgument("boot.img" + ".google")
-       log.info("To Commandline: " + ret.toString())
+       log.debug("To Commandline: " + ret.toString())
        return ret
    }
} }

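Editor's note: `toCommandLine()` builds the mkbootimg invocation with Apache commons-exec, and the hunk above moves `--second_offset` out of the conditional so it is always emitted. A trimmed-down sketch of the same pattern (flag set and names abbreviated):

```kotlin
import org.apache.commons.exec.CommandLine

// Accumulate mkbootimg flags: offsets unconditionally, optional values
// only when present.
fun mkbootimgCmdSketch(kernel: String, secondOffset: Long, board: String): CommandLine {
    val ret = CommandLine("mkbootimg")
    ret.addArgument("--kernel")
    ret.addArgument(kernel)
    ret.addArgument("--second_offset")                            // always emitted now
    ret.addArgument("0x" + java.lang.Long.toHexString(secondOffset))
    if (board.isNotBlank()) {                                     // optional argument
        ret.addArgument("--board")
        ret.addArgument(board)
    }
    return ret
}
```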
@@ -157,6 +157,7 @@ class Packer {
        val googleCmd = info2.toCommandLine().apply {
            addArgument(cfg.info.output + ".google")
        }
+       log.warn(googleCmd.toString())
        DefaultExecutor().execute(googleCmd)
        val ourHash = hashFileAndSize(cfg.info.output + ".clear")

@@ -99,4 +99,21 @@ data class BootloaderMsg(
            }
        }
    }
fun updateBootloaderMessage(command: String, recovery: String, options: Array<String>?) {
this.command = command
this.recovery = "$recovery\n"
options?.forEach {
this.recovery += if (it.endsWith("\n")) {
it
} else {
it + "\n"
}
}
}
fun updateBootFastboot() {
this.command = "boot-fastboot"
this.recovery = ""
}
}

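Editor's note: both helpers above fill in fields of the bootloader control block (BCB) that lives on the misc partition; in AOSP's bootloader_message layout that block is command[32] + status[32] + recovery[768] + stage[32] + reserved[1184] = 2048 bytes. A sketch of the packing, assuming that standard layout:

```kotlin
// Pack command and recovery into a 2048-byte BCB, assuming AOSP's layout:
// command[32] | status[32] | recovery[768] | stage[32] | reserved[1184].
fun packBcbSketch(command: String, recovery: String): ByteArray {
    val bcb = ByteArray(2048)
    command.toByteArray().copyInto(bcb, destinationOffset = 0)   // e.g. "boot-fastboot"
    recovery.toByteArray().copyInto(bcb, destinationOffset = 64) // "recovery\n<options...>"
    return bcb
}

fun main() {
    val msg = packBcbSketch("boot-recovery", "recovery\n--update_package=/cache/ota.zip\n")
    check(msg.size == 2048)
}
```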
@@ -34,7 +34,7 @@ class KernelExtractor {
        it.execute(cmd)
        log.info(cmd.toString())
        val kernelVersion = File(kernelVersionFile).readLines()
-       log.info("kernel version: " + kernelVersion)
+       log.info("kernel version: $kernelVersion")
        log.info("kernel config dumped to : $kernelConfigFile")
        InfoTable.instance.addRow("\\-- version $kernelVersion", kernelVersionFile)
        InfoTable.instance.addRow("\\-- config", kernelConfigFile)

@@ -12,7 +12,7 @@ class BootImgParser : IPackable {
    private val log = LoggerFactory.getLogger(BootImgParser::class.java)
    override fun capabilities(): List<String> {
-       return listOf("^boot\\.img$", "^recovery\\.img$")
+       return listOf("^boot\\.img$", "^recovery\\.img$", "^recovery-two-step\\.img$")
    }
    override fun unpack(fileName: String) {

@@ -1,6 +1,7 @@
package avb
import avb.alg.Algorithms
+import avb.blob.AuxBlob
import org.apache.commons.codec.binary.Hex
import org.junit.Assert.assertEquals
import org.junit.Test

@@ -1,5 +1,6 @@
package avb
+import avb.blob.Footer
import org.apache.commons.codec.binary.Hex
import org.junit.Test

@@ -1,5 +1,6 @@
package avb
+import avb.blob.Header
import org.apache.commons.codec.binary.Hex
import org.junit.Test
import java.io.ByteArrayInputStream

@@ -1,135 +0,0 @@
import cfig.Helper
import cfig.io.Struct
import com.fasterxml.jackson.databind.ObjectMapper
import org.junit.Test
import org.junit.Assert.*
import java.io.ByteArrayInputStream
import kotlin.reflect.jvm.jvmName
@ExperimentalUnsignedTypes
class StructTest {
private fun getConvertedFormats(inStruct: Struct): ArrayList<Map<String, Int>> {
val f = inStruct.javaClass.getDeclaredField("formats")
f.isAccessible = true
val formatDumps = arrayListOf<Map<String, Int>>()
(f.get(inStruct) as ArrayList<*>).apply {
this.forEach {
@Suppress("UNCHECKED_CAST")
val format = it as Array<Any>
formatDumps.add(mapOf(format[0].toString().split(" ")[1] to (format[1] as Int)))
}
}
return formatDumps
}
private fun constructorTestFun1(inFormatString: String) {
println(ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(getConvertedFormats(Struct(inFormatString))))
}
@Test
fun constructorTest() {
constructorTestFun1("2s")
constructorTestFun1("2b")
constructorTestFun1("2bs")
}
@Test
fun calcSizeTest() {
assertEquals(16, Struct("<2i4b4b").calcSize())
assertEquals(16, Struct("<Q8b").calcSize())
assertEquals(2, Struct(">h").calcSize())
assertEquals(3, Struct(">3s").calcSize())
assertEquals(4, Struct("!Hh").calcSize())
Struct("<2i4b4b").dump()
try {
Struct("abcd")
throw Exception("should not reach here")
} catch (e: IllegalArgumentException) {
}
}
@Test
fun integerLE() {
//int (4B)
assertTrue(Struct("<2i").pack(1, 7321)!!.contentEquals(Helper.fromHexString("01000000991c0000")))
val ret = Struct("<2i").unpack(ByteArrayInputStream(Helper.fromHexString("01000000991c0000")))
assertEquals(2, ret.size)
assertTrue(ret[0] is Int)
assertTrue(ret[1] is Int)
assertEquals(1, ret[0] as Int)
assertEquals(7321, ret[1] as Int)
//unsigned int (4B)
assertTrue(Struct("<I").pack(2L)!!.contentEquals(Helper.fromHexString("02000000")))
assertTrue(Struct("<I").pack(2)!!.contentEquals(Helper.fromHexString("02000000")))
//greater than Int.MAX_VALUE
assertTrue(Struct("<I").pack(2147483748L)!!.contentEquals(Helper.fromHexString("64000080")))
assertTrue(Struct("<I").pack(2147483748)!!.contentEquals(Helper.fromHexString("64000080")))
try {
Struct("<I").pack(-12)
throw Exception("should not reach here")
} catch (e: IllegalArgumentException) {
}
//negative int
assertTrue(Struct("<i").pack(-333)!!.contentEquals(Helper.fromHexString("b3feffff")))
}
@Test
fun integerBE() {
run {
assertTrue(Struct(">2i").pack(1, 7321)!!.contentEquals(Helper.fromHexString("0000000100001c99")))
val ret = Struct(">2i").unpack(ByteArrayInputStream(Helper.fromHexString("0000000100001c99")))
assertEquals(1, ret[0] as Int)
assertEquals(7321, ret[1] as Int)
}
run {
assertTrue(Struct("!i").pack(-333)!!.contentEquals(Helper.fromHexString("fffffeb3")))
val ret2 = Struct("!i").unpack(ByteArrayInputStream(Helper.fromHexString("fffffeb3")))
assertEquals(-333, ret2[0] as Int)
}
}
@Test
fun byteArrayTest() {
//byte array
assertTrue(Struct("<4b").pack(byteArrayOf(-128, 2, 55, 127))!!.contentEquals(Helper.fromHexString("8002377f")))
assertTrue(Struct("<4b").pack(intArrayOf(0, 55, 202, 0xff))!!.contentEquals(Helper.fromHexString("0037caff")))
try {
Struct("b").pack(intArrayOf(256))
throw Exception("should not reach here")
} catch (e: IllegalArgumentException) {
}
try {
Struct("b").pack(intArrayOf(-1))
throw Exception("should not reach here")
} catch (e: IllegalArgumentException) {
}
}
@Test
fun packCombinedTest() {
assertTrue(Struct("<2i4b4b").pack(
1, 7321, byteArrayOf(1, 2, 3, 4), byteArrayOf(200.toByte(), 201.toByte(), 202.toByte(), 203.toByte()))!!
.contentEquals(Helper.fromHexString("01000000991c000001020304c8c9cacb")))
assertTrue(Struct("<2i4b4b").pack(
1, 7321, byteArrayOf(1, 2, 3, 4), intArrayOf(200, 201, 202, 203))!!
.contentEquals(Helper.fromHexString("01000000991c000001020304c8c9cacb")))
}
@Test
fun paddingTest() {
assertTrue(Struct("b2x").pack(byteArrayOf(0x13), null)!!.contentEquals(Helper.fromHexString("130000")))
assertTrue(Struct("b2xi").pack(byteArrayOf(0x13), null, 55)!!.contentEquals(Helper.fromHexString("13000037000000")))
}
@Test
fun stringTest() {
Struct("5s").pack("Good".toByteArray())!!.contentEquals(Helper.fromHexString("476f6f6400"))
Struct("5s1b").pack("Good".toByteArray(), byteArrayOf(13))!!.contentEquals(Helper.fromHexString("476f6f64000d"))
}
}

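Editor's note: the deleted file only covered the legacy `Struct`; `Struct3` keeps equivalent tests. For reference, the `"<2i"` case above (`pack(1, 7321)` -> `01000000991c0000`) in plain JDK terms:

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun main() {
    // "<2i": two little-endian 32-bit ints, 1 and 7321 (0x1C99)
    val bb = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN)
    bb.putInt(1)
    bb.putInt(7321)
    val hex = bb.array().joinToString("") { "%02x".format(it.toInt() and 0xff) }
    check(hex == "01000000991c0000")
}
```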
@@ -1,5 +1,6 @@
package init
+import cfig.bootloader_message.BootloaderMsg
import cfig.init.Reboot
import org.junit.Test
import java.util.*
@@ -23,17 +24,40 @@ class RebootTest {
    }
    @Test
-   fun fastbootd() {
+   fun bootloader() {
        Reboot.handlePowerctlMessage("reboot,bootloader")
+   }
+   @Test
+   fun fastboot2bootloader() {
        val props = Properties()
        Reboot.handlePowerctlMessage("reboot,fastboot", props)
+   }
+   @Test
+   fun fastbootd() {
+       val props = Properties()
        props.put(Reboot.dynamicPartitionKey, "true")
        Reboot.handlePowerctlMessage("reboot,fastboot", props)
    }
@Test
fun fastbootd2() {
val msg = BootloaderMsg()
msg.updateBootloaderMessage("boot-fastboot", "recovery", null)
msg.writeBootloaderMessage()
}
    @Test
    fun sideload() {
        Reboot.handlePowerctlMessage("reboot,sideload-auto-reboot")
        Reboot.handlePowerctlMessage("reboot,sideload")
    }
@Test
fun rescue() {
val msg = BootloaderMsg()
msg.updateBootloaderMessage("boot-rescue", "recovery", null)
msg.writeBootloaderMessage()
}
}

@@ -32,6 +32,7 @@ if (parseGradleVersion(gradle.gradleVersion) < 5) {
}
def workdir = 'build/unzip_boot'
+def GROUP_ANDROID = "Android"
project.ext.rootWorkDir = new File(workdir).getAbsolutePath()
String activeImg = "boot.img"
String activePath = "/boot"
@@ -49,7 +50,7 @@ if (new File("boot.img").exists()) {
    activePath = "/vbmeta"
}
project.ext.outClearIMg = new File(String.format("%s.clear", activeImg)).getAbsolutePath()
-project.ext.mkbootimgBin = new File("src/mkbootimg/mkbootimg").getAbsolutePath()
+project.ext.mkbootimgBin = new File("tools/mkbootimg").getAbsolutePath()
project.ext.mkbootfsBin = new File("mkbootfs/build/exe/mkbootfs/mkbootfs").getAbsolutePath()
project.ext.avbtool = new File("avb/avbtool").getAbsolutePath()
project.ext.bootSigner = new File("boot_signer/build/libs/boot_signer.jar").getAbsolutePath()
@@ -58,49 +59,14 @@ logger.warn("Active image target: " + activeImg)
// ----------------------------------------------------------------------------
// tasks
// ----------------------------------------------------------------------------
task unpack(type: JavaExec, dependsOn: ["bbootimg:jar"]) {
classpath = sourceSets.main.runtimeClasspath
main = "cfig.RKt"
classpath = files("bbootimg/build/libs/bbootimg.jar")
maxHeapSize '512m'
args "unpack", activeImg, rootProject.mkbootimgBin, rootProject.avbtool, rootProject.bootSigner, rootProject.mkbootfsBin
}
task packClear(type: JavaExec, dependsOn: ["bbootimg:jar", "mkbootfs:mkbootfsExecutable"]) {
classpath = sourceSets.main.runtimeClasspath
main = "cfig.RKt"
classpath = files("bbootimg/build/libs/bbootimg.jar")
maxHeapSize '512m'
args "pack", activeImg, rootProject.mkbootimgBin, rootProject.avbtool, rootProject.bootSigner, rootProject.mkbootfsBin
}
task sign(type: JavaExec, dependsOn: ["bbootimg:jar", packClear, "boot_signer:jar"]) {
classpath = sourceSets.main.runtimeClasspath
main = "cfig.RKt"
classpath = files("bbootimg/build/libs/bbootimg.jar")
maxHeapSize '4096m'
args "sign", activeImg, rootProject.mkbootimgBin, rootProject.avbtool, rootProject.bootSigner, rootProject.mkbootfsBin
}
task signTest(type: JavaExec, dependsOn: ["boot_signer:jar"]) {
main = 'com.android.verity.BootSignature'
classpath = files("boot_signer/build/libs/boot_signer.jar")
maxHeapSize '512m'
args activePath, activeImg + '.clear', 'security/verity.pk8', 'security/verity.x509.pem', activeImg + '.signed', rootProject.mkbootfsBin
}
task pack(dependsOn: sign) {
doLast {
println("Pack task finished: " + activeImg + ".signed")
}
}
task _setup(type: Copy) {
+   group GROUP_ANDROID
    from 'src/test/resources/boot.img'
    into '.'
}
task pull() {
+   group GROUP_ANDROID
    doFirst {
        println("Pulling ...")
    }
@@ -179,6 +145,7 @@ void updateBootImage(String activeImg) {
}
task flash {
+   group GROUP_ANDROID
    doLast {
        updateBootImage(activeImg)
        updateBootImage("vbmeta.img")
@@ -190,19 +157,22 @@ void rebootRecovery() {
}
task rr {
+   group GROUP_ANDROID
    doLast {
        rebootRecovery()
    }
}
-task u(type: JavaExec, dependsOn: ["bbootimg:jar"]) {
+task unpack(type: JavaExec, dependsOn: ["bbootimg:jar"]) {
+   group GROUP_ANDROID
    main = "cfig.packable.PackableLauncherKt"
    classpath = files("bbootimg/build/libs/bbootimg.jar")
    maxHeapSize '512m'
    args "unpack"
}
-task p(type: JavaExec, dependsOn: ["bbootimg:jar", "mkbootfs:mkbootfsExecutable"]) {
+task pack(type: JavaExec, dependsOn: ["bbootimg:jar", "mkbootfs:mkbootfsExecutable"]) {
+   group GROUP_ANDROID
    main = "cfig.packable.PackableLauncherKt"
    classpath = files("bbootimg/build/libs/bbootimg.jar")
    maxHeapSize '512m'

@@ -46,9 +46,9 @@
|--------------------------------+--------------------------| --> 608 (0x260)
|<cmdline part 2> | 1024 |
|--------------------------------+--------------------------| --> 1632 (0x660)
-|<dtbo length> [v1] | 4 |
+|<recovery dtbo length> [v1] | 4 |
|--------------------------------+--------------------------| --> 1636
-|<dtbo offset> [v1] | 8 |
+|<recovery dtbo offset> [v1] | 8 |
|--------------------------------+--------------------------| --> 1644
|<header size> [v1] | 4 (v1: value=1648) |
| | (v2: value=1660) |
@@ -112,7 +112,7 @@
| | - Header Magic "AVB0" | 4 |
| | - avb_version Major | 4 |
| | - avb_version Minor | 4 |
-| | - authentication blob size | 8 |
+| | - authentication_blob_size | 8 |
| | - auxiliary blob size | 8 |
| | - algorithm type | 4 |
| | - hash_offset | 8 |
@@ -133,14 +133,14 @@
| | - RESERVED | 80 |
| |--------------------------------+-------------------------+ --> + 256
| | Authentication Blob | |
-| | - Hash of Header & Aux Blob | alg.hash_num_bytes |
-| | - Signature of Hash | alg.signature_num_bytes |
+| | - Hash of Header & Aux Blob | alg.hash_num_bytes | --> + 256 + hash_offset
+| | - Signature of Hash | alg.signature_num_bytes | --> + 256 + signature_offset
| | - Padding | align by 64 |
| +--------------------------------+-------------------------+
| | Auxiliary Blob | |
-| | - descriptors | | --> + 256 + descriptors_offset
-| | - pub key | | --> + 256 + pub_key_offset
-| | - pub key meta data | | --> + 256 + pub_key_metadata_offset
+| | - descriptors | | --> + 256 + authentication_blob_size + descriptors_offset
+| | - pub key | | --> + 256 + authentication_blob_size + pub_key_offset
+| | - pub key meta data | | --> + 256 + authentication_blob_size + pub_key_metadata_offset
| | - padding | align by 64 |
| +--------------------------------+-------------------------+
| | Padding | align by block_size |

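Editor's note: the corrected annotations encode one rule: auxiliary-blob items are addressed past both the 256-byte header and the whole authentication blob. As a worked example (illustrative helper):

```kotlin
// Absolute position of an aux-blob item inside vbmeta.img:
// header (256) | authentication blob | auxiliary blob.
fun auxItemPosition(authBlockSize: Long, itemOffsetInAux: Long): Long =
    256 + authBlockSize + itemOffsetInAux

fun main() {
    // e.g. a 576-byte auth blob with descriptors at aux offset 0:
    check(auxItemPosition(576, 0) == 832L)
}
```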
@@ -52,7 +52,7 @@ def verifySingleJson(inResourceDir, inImageDir, jsonFile):
    subprocess.check_call("gradle pack", shell = True)
    for k, v in verifyItems["hash"].items():
        log.info("%s : %s" % (k, v))
-       unittest.TestCase().assertEqual(hashFile(k), v)
+       unittest.TestCase().assertEqual(v, hashFile(k))
def verifySingleDir(inResourceDir, inImageDir):
    resDir = inResourceDir

@@ -1 +1 @@
-Subproject commit b33e958f598f8cb56df60a6f50654c09e85b4996
+Subproject commit 42ceedb9271b5ebd3cefd0815719ee5ddcd5f130

@@ -70,13 +70,15 @@ def write_header(args):
        raise ValueError('Boot header version %d not supported' % args.header_version)
    args.output.write(pack('8s', BOOT_MAGIC))
+   final_ramdisk_offset = (args.base + args.ramdisk_offset) if filesize(args.ramdisk) > 0 else 0
+   final_second_offset = (args.base + args.second_offset) if filesize(args.second) > 0 else 0
    args.output.write(pack('10I',
        filesize(args.kernel),                         # size in bytes
        args.base + args.kernel_offset,                # physical load addr
        filesize(args.ramdisk),                        # size in bytes
-       args.base + args.ramdisk_offset,               # physical load addr
+       final_ramdisk_offset,                          # physical load addr
        filesize(args.second),                         # size in bytes
-       args.base + args.second_offset,                # physical load addr
+       final_second_offset,                           # physical load addr
        args.base + args.tags_offset,                  # physical addr for kernel tags
        args.pagesize,                                 # flash page size we assume
        args.header_version,                           # version of bootimage header
@@ -113,6 +115,10 @@ def write_header(args):
        args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
    if args.header_version > 1:
+       if filesize(args.dtb) == 0:
+           raise ValueError("DTB image must not be empty.")
        args.output.write(pack('I', filesize(args.dtb)))           # size in bytes
        args.output.write(pack('Q', args.base + args.dtb_offset))  # dtb physical load address
    pad_file(args.output, args.pagesize)
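Editor's note: the mkbootimg hunks above make two behavioral changes: absent ramdisk/second images now get load address 0 rather than base+offset, and header v2 rejects an empty DTB. The same logic as a Kotlin sketch (hypothetical `ImgArgs` holder):

```kotlin
// Mirror of the mkbootimg change: zero the load address of any absent image,
// and require a DTB for header v2+.
data class ImgArgs(
    val base: Long, val ramdiskOffset: Long, val secondOffset: Long,
    val ramdiskSize: Long, val secondSize: Long, val dtbSize: Long, val headerVersion: Int
)

fun loadAddresses(a: ImgArgs): Pair<Long, Long> {
    val ramdiskAddr = if (a.ramdiskSize > 0) a.base + a.ramdiskOffset else 0
    val secondAddr = if (a.secondSize > 0) a.base + a.secondOffset else 0
    if (a.headerVersion > 1) require(a.dtbSize > 0) { "DTB image must not be empty." }
    return ramdiskAddr to secondAddr
}

fun main() {
    val (r, s) = loadAddresses(ImgArgs(0x10000000, 0x01000000, 0x00f00000, 0, 0, 1, 2))
    check(r == 0L && s == 0L) // empty images -> zeroed load addresses
}
```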