So, hexadecimal uses 16 distinct characters (the digits 0–9 and the letters A–F). Each character stores 4 bits of data (2⁴ = 16).
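For instance, two hex characters describe exactly one byte. A quick check in Python (standard library only):

    data = b"Hi"                  # two bytes: 0x48, 0x69
    print(data.hex())             # "4869" -- four hex characters
    print(bytes.fromhex("4869"))  # b'Hi' -- and back again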
If you use the 10 digits and 26 letters of the Latin alphabet, the resulting encoding is called Base36.
It is a rather impractical format for storing data, though. For simple conversion, the number of possible characters should be a power of 2: that way a program can use quick bit shifts instead of division (which is slow, especially on big numbers) to determine which character to use. Since 36 isn't a power of 2, Base36 is mostly used to encode numbers, not large sequences of data.
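Here's a rough sketch in Python of why the power of 2 matters (the function names and alphabets are just for illustration, not any particular library's API): with hexadecimal you can peel off 4 bits at a time with a mask and a shift, while Base36 forces a full division on every step:

    HEX_ALPHABET = "0123456789abcdef"
    BASE36_ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

    def to_hex(n):
        # Power-of-2 base: grab 4 bits at a time with mask + shift.
        out = ""
        while n:
            out = HEX_ALPHABET[n & 0xF] + out  # low 4 bits pick the character
            n >>= 4                            # cheap bit shift
        return out or "0"

    def to_base36(n):
        # Non-power-of-2 base: no choice but a full division each step.
        out = ""
        while n:
            n, rem = divmod(n, 36)             # expensive on very large numbers
            out = BASE36_ALPHABET[rem] + out
        return out or "0"

    print(to_hex(255))       # ff
    print(to_base36(1295))   # zz
    print(int("zz", 36))     # 1295 -- Python's int() parses Base36 natively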
Base32 is a slightly smaller variant that fits 5 bits of data into each character (2⁵ = 32).
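Python's standard base64 module implements the RFC 4648 flavor of Base32, if you want to play with it:

    import base64

    # 5 bytes = 40 bits = exactly 8 Base32 characters (5 bits each)
    print(base64.b32encode(b"hello"))     # b'NBSWY3DP'
    print(base64.b32decode(b"NBSWY3DP"))  # b'hello'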
If you combine the 10 digits with the 26 uppercase and the 26 lowercase letters (treating upper and lower case as distinct), you get 62 characters. That's also an impractical number for computer purposes. But add two extra characters and you get 64, another nice power of two (2⁶ = 64), letting one character store 6 bits. And Base64 is a common encoding scheme for data.
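Same deal with Base64: every 3 bytes (24 bits) turn into exactly 4 characters of 6 bits each. For example:

    import base64

    # 3 bytes = 24 bits = exactly 4 Base64 characters (6 bits each)
    print(base64.b64encode(b"Hi!"))   # b'SGkh'
    print(base64.b64decode(b"SGkh"))  # b'Hi!'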
And when you know how many bits a character can hold, you can calculate how "efficient" the encoding will be, i.e. how many characters are needed to store the data. Base32 packs 5 bits per character instead of hexadecimal's 4, so it needs 4/5 as many characters, or 20% fewer; Base64 packs 6 bits, so it needs 4/6 = 2/3 as many, or 33.3% fewer.
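For example, a 16-byte value (128 bits) takes ⌈128/4⌉ = 32 hex characters, but only ⌈128/5⌉ = 26 Base32 characters or ⌈128/6⌉ = 22 Base64 characters (ignoring any '=' padding). Quick check in Python:

    import math

    bits = 16 * 8  # a 16-byte value
    for name, bits_per_char in (("hex", 4), ("Base32", 5), ("Base64", 6)):
        print(name, math.ceil(bits / bits_per_char), "characters")
    # hex 32 characters
    # Base32 26 characters
    # Base64 22 characters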
If you're using Linux (or macOS or MinGW or Cygwin or MSYS), you can do something like this in the terminal:
    xxd -r -ps | base64
The first command reads standard input and decodes the hex string back into raw bytes, and the second one encodes that output as Base64.
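If you don't have those tools installed, the same pipeline is a few lines of Python (the hex string below is just a stand-in, not the one from your post):

    import base64

    hex_string = "48656c6c6f"        # stand-in example: the hex for "Hello"
    raw = bytes.fromhex(hex_string)  # same job as `xxd -r -ps`
    print(base64.b64encode(raw).decode())  # same job as `base64` -> SGVsbG8=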
If I pass the hex string mentioned in your original post through this command, I get: