Computing: Floating Point Binary Numbers

We’re trying to convert -11/32 into a floating point binary number with 8 bits each for the mantissa and the exponent.

We get 10101000 11111111

mantissa: 1.0101 (two’s complement, i.e. -11/16)
exponent: -1 (dec)
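
Here’s a minimal Python sketch of our working (the helper name and the normalisation rule are our own assumptions: the mantissa is an 8-bit two’s complement fraction with the binary point after the first bit, so a negative value gets normalised into [-1, -1/2)):

```python
from fractions import Fraction

def encode(value, m_bits=8, e_bits=8):
    # Normalise so a negative mantissa lies in [-1, -1/2)
    # (or [1/2, 1) for positives), tracking the exponent as we go.
    value = Fraction(value)
    exp = 0
    while not (-1 <= value < Fraction(-1, 2) or Fraction(1, 2) <= value < 1):
        if abs(value) < Fraction(1, 2):
            value *= 2
            exp -= 1
        else:
            value /= 2
            exp += 1
    # Binary point sits after the first mantissa bit, so scale by 2**(m_bits - 1);
    # here -11/16 * 128 = -88, which is exact.
    mant_int = int(value * 2 ** (m_bits - 1))
    mant_bits = format(mant_int & (2 ** m_bits - 1), f'0{m_bits}b')
    exp_bits = format(exp & (2 ** e_bits - 1), f'0{e_bits}b')
    return mant_bits, exp_bits

print(encode(Fraction(-11, 32)))   # ('10101000', '11111111')
```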

but our textbook gets 10100000 11111111,
which we decode back to -3/8, not -11/32

mantissa: 1.01 (two’s complement, i.e. -3/4)
exponent: -1 (dec)
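
Going the other way, a quick decoder sketch under the same format assumptions shows the two bit patterns really do come back as different values:

```python
from fractions import Fraction

def decode(mant_bits, exp_bits):
    # Two's complement: subtract 2**n when the top bit is set.
    m = int(mant_bits, 2) - (2 ** len(mant_bits) if mant_bits[0] == '1' else 0)
    e = int(exp_bits, 2) - (2 ** len(exp_bits) if exp_bits[0] == '1' else 0)
    # Binary point after the first mantissa bit -> divide by 2**(len - 1).
    return Fraction(m, 2 ** (len(mant_bits) - 1)) * Fraction(2) ** e

print(decode('10101000', '11111111'))   # -11/32  (our answer)
print(decode('10100000', '11111111'))   # -3/8    (textbook's answer)
```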

Can anyone confirm why these are different / who is right?
(we are going mad)

Thanks!

Edit: I should say, we’re using two’s complement for both the mantissa and exponent parts.
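
For reference, a two-line check of the two’s complement patterns themselves (treating the mantissa as the scaled integer -11/16 × 2^7 = -88, which is our assumption about the format):

```python
# 8-bit two's complement: a negative value is stored as value + 2**8.
print(format(-1  & 0xFF, '08b'))   # 11111111  -> exponent -1
print(format(-88 & 0xFF, '08b'))   # 10101000  -> mantissa -88/128 = -11/16
```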