I have binary data in an unsigned char variable. I need to convert it to PEM base64 in C. I looked in the OpenSSL library but could not find a function. Does anybody have any idea?

Keep in mind that this doesn't do any error checking while decoding - non-base64-encoded data will get processed anyway. But you can also do it in OpenSSL; the openssl enc command does it.

I know this question is quite old, but I was getting confused by the number of solutions provided, each one claiming to be faster and better.
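For reference, a minimal standalone encoder is only a few dozen lines of C. This is an illustrative sketch using the standard RFC 4648 alphabet, not code from any answer in this thread; the function name base64_encode is mine:

```c
#include <stdlib.h>

/* Minimal base64 encoder (standard RFC 4648 alphabet). Illustrative
   sketch, not from any answer in this thread. Returns a heap-allocated,
   NUL-terminated string the caller must free, or NULL on failure. */
static const char b64_alphabet[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

char *base64_encode(const unsigned char *src, size_t len)
{
    size_t olen = 4 * ((len + 2) / 3);   /* padded output length */
    char *out = malloc(olen + 1);
    if (!out)
        return NULL;

    size_t i, j;
    for (i = 0, j = 0; i + 2 < len; i += 3) {
        /* pack 3 input bytes into 24 bits, emit 4 six-bit symbols */
        unsigned v = (unsigned)src[i] << 16
                   | (unsigned)src[i + 1] << 8
                   | src[i + 2];
        out[j++] = b64_alphabet[(v >> 18) & 0x3F];
        out[j++] = b64_alphabet[(v >> 12) & 0x3F];
        out[j++] = b64_alphabet[(v >> 6) & 0x3F];
        out[j++] = b64_alphabet[v & 0x3F];
    }
    if (i < len) {                       /* 1 or 2 leftover bytes */
        unsigned v = (unsigned)src[i] << 16;
        if (i + 1 < len)
            v |= (unsigned)src[i + 1] << 8;
        out[j++] = b64_alphabet[(v >> 18) & 0x3F];
        out[j++] = b64_alphabet[(v >> 12) & 0x3F];
        out[j++] = (i + 1 < len) ? b64_alphabet[(v >> 6) & 0x3F] : '=';
        out[j++] = '=';
    }
    out[j] = '\0';
    return out;
}
```

For example, base64_encode((const unsigned char *)"Man", 3) returns "TWFu".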
I put together a project on GitHub to compare the base64 encoders and decoders. Tests were also conducted using Visual Studio. The two fastest encoder implementations I found were Jouni Malinen's at http: Here is the time in microseconds to encode 32K of data using the different algorithms I have tested up to now:
Here are the decoding results, and I must admit that I am a bit surprised:

None of the answers satisfied my needs. I needed a simple two-function solution for encoding and decoding, but I was too lazy to write my own code, so I found this:

It is lightweight and perhaps the fastest publicly available implementation. It's also a dedicated stand-alone base64 encoding library, which can be nice if you don't need all the other stuff that comes with a larger library such as OpenSSL or glib.
You can also play around on your own, e.g. Know ye all persons by these presents that you should not confuse "playing around on your own" with "implementing a standard."
The performance boost is achieved by using a lookup table for encoding and decoding. You'll need to link with the "crypto" library, which is OpenSSL. This has been checked for leaks with valgrind, although you could add some additional error-checking code to make it a bit better - I know at least the write function should check its return value. A small improvement to the code from ryyst (who got the most votes) is to use a static const precomputed decoding table rather than a dynamically allocated one.
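For illustration, a decoder built around such a static const table might look like this. This is a sketch, not ryyst's actual code: each alphabet byte maps to its 6-bit value plus one, so the zero-initialized entries mark invalid input, and the table is indexed with unsigned char to avoid the signed-char hazard discussed further down:

```c
#include <stdlib.h>

/* Decoding with a static const precomputed table (sketch, not ryyst's
   code). Entries hold value+1 so that 0 flags a byte outside the
   alphabet. No newline handling; caller frees the result. */
static const unsigned char b64_dtable[256] = {
    ['A']= 1,['B']= 2,['C']= 3,['D']= 4,['E']= 5,['F']= 6,['G']= 7,['H']= 8,
    ['I']= 9,['J']=10,['K']=11,['L']=12,['M']=13,['N']=14,['O']=15,['P']=16,
    ['Q']=17,['R']=18,['S']=19,['T']=20,['U']=21,['V']=22,['W']=23,['X']=24,
    ['Y']=25,['Z']=26,['a']=27,['b']=28,['c']=29,['d']=30,['e']=31,['f']=32,
    ['g']=33,['h']=34,['i']=35,['j']=36,['k']=37,['l']=38,['m']=39,['n']=40,
    ['o']=41,['p']=42,['q']=43,['r']=44,['s']=45,['t']=46,['u']=47,['v']=48,
    ['w']=49,['x']=50,['y']=51,['z']=52,['0']=53,['1']=54,['2']=55,['3']=56,
    ['4']=57,['5']=58,['6']=59,['7']=60,['8']=61,['9']=62,['+']=63,['/']=64,
};

unsigned char *base64_decode(const char *src, size_t len, size_t *out_len)
{
    if (len % 4 != 0)
        return NULL;

    size_t pad = 0;
    if (len >= 1 && src[len - 1] == '=') pad++;
    if (len >= 2 && src[len - 2] == '=') pad++;

    size_t olen = len / 4 * 3 - pad;
    unsigned char *out = malloc(olen ? olen : 1);
    if (!out)
        return NULL;

    size_t j = 0;
    for (size_t i = 0; i < len; i += 4) {
        unsigned v = 0;
        for (int k = 0; k < 4; k++) {
            char c = src[i + k];
            if (c == '=') {              /* padding: shift in zero bits */
                v <<= 6;
                continue;
            }
            unsigned char d = b64_dtable[(unsigned char)c];
            if (d == 0) {                /* byte not in the alphabet */
                free(out);
                return NULL;
            }
            v = (v << 6) | (d - 1);
        }
        out[j++] = (v >> 16) & 0xFF;
        if (j < olen) out[j++] = (v >> 8) & 0xFF;
        if (j < olen) out[j++] = v & 0xFF;
    }
    *out_len = olen;
    return out;
}
```

Because the table lives in .rodata, nothing has to be allocated or freed per call, which is the point of the improvement over the malloc'd table.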
Another solution could be to use std::

Here is an optimized version of the encoder from the accepted answer that also supports line breaking for MIME and other protocols (a similar optimization can be applied to the decoder):
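Line breaking of the kind mentioned above can also be layered on top of any encoder after the fact. A hypothetical helper (b64_wrap_mime is my name, not from the thread) that inserts CRLF every wrap characters; MIME (RFC 2045) caps encoded lines at 76 characters:

```c
#include <stdlib.h>
#include <string.h>

/* Re-wraps an already-encoded base64 string by inserting CRLF every
   'wrap' characters; MIME (RFC 2045) caps encoded lines at 76.
   Hypothetical helper, not from the thread. Caller frees the result. */
char *b64_wrap_mime(const char *b64, size_t wrap)
{
    size_t len = strlen(b64);
    size_t lines = wrap ? (len + wrap - 1) / wrap : 1;
    char *out = malloc(len + 2 * lines + 1);  /* room for CRLFs + NUL */
    if (!out)
        return NULL;

    size_t o = 0;
    for (size_t i = 0; i < len; i++) {
        if (i > 0 && wrap && i % wrap == 0) {
            out[o++] = '\r';
            out[o++] = '\n';
        }
        out[o++] = b64[i];
    }
    out[o] = '\0';
    return out;
}
```

The optimized encoder in the answer folds this into the encoding loop instead, which avoids the second pass over the output.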
How do I base64 encode/decode in C?

I have a GitHub repository with tested base64 and unbase64 functions. The only header you need is base Here's the one I'm using:

It doesn't make any sense to use this if there is a library. You can skip the libm and math.h. I realize it is "no error checking", but notice especially that although the decoding table in the decoder is an array of 256, since char is signed on most architectures, you are really indexing from -128 to 127. Any character with the high bit set will cause you to read outside the allocated memory.
Forcing the data lookup to be an unsigned char clears that up. You still get garbage out for garbage in, but you won't segfault. Together with all the other errors, it's a pretty bad implementation.

Here is the one from Jouni Malinen that I slightly modified to return a std:: Here is the code for the sake of completeness:

I really don't think std:: While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference.
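The signed-char hazard described above can be shown in isolation. Here dtable stands in for a 256-entry decoding table indexed by raw input bytes; both helper names are illustrative, not from the answers above:

```c
/* The signed-char hazard in isolation. 'dtable' stands in for a
   256-entry decoding table; both helper names are illustrative. */
static int dtable[256];

int decode_byte_unsafe(const char *p)
{
    /* If plain char is signed, any byte >= 0x80 becomes a negative
       index here and reads before the start of dtable. */
    return dtable[(int)*p];
}

int decode_byte_safe(const char *p)
{
    /* Casting through unsigned char keeps the index in 0..255:
       garbage in still yields garbage out, but no out-of-bounds read. */
    return dtable[(unsigned char)*p];
}
```

With input byte 0xC3, for instance, the unsafe form indexes dtable[-61] on signed-char targets, while the safe form indexes dtable[195].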
Link-only answers can become invalid if the linked page changes.

Putting the code below in case the site goes down:

In no event will the author be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:

1. The origin of this source code must not be misrepresented; you must not claim that you wrote the original source code. If you use this source code in a product, an acknowledgment in the product documentation would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original source code.
3. This notice may not be removed or altered from any source distribution.
After an hour of debugging, I figured out that libb64 assumes that char is signed on the target system. Note that the SourceForge implementation adds newlines, which are not universally supported. A fork by BuLogics on GitHub removes them, and I've opened a pull request based on your extremely useful finding, Noir.
Here is a list of base64 encoding variants. I do not see a base64 encoding variant with the ordering of characters you use.
But the math behind the algorithm is correct.

Here's my solution using OpenSSL. It is necessary for base64 encoding because of the pad characters.

On the "Adds a null-terminator" line I get an AddressSanitizer error: the write overflows the heap by 1 byte. Thanks, I have corrected the error, in addition to doing extensive testing with randomly sized strings of random bytes to ensure that the code works as advertised. I compiled it with cc -o base base. It produced this output:

Original character string is: Base64 encode this string!
Base64 encoded string is:

Here's the decoder I've been using for years. It's just a very simple operation that makes sure the dest buffer is set to NULL in case the caller did not do that before the call, so that if the decode fails, the returned buffer is zero length. I didn't say I debugged, traced, and profiled this routine; it's just one I've been using for years. Maybe I'll just edit it out. Thanks for pointing it out! Yeah, it caused a nasty bug in our app. Hi Larry, thanks for sharing your code.
Added code inline by request. The linked blog no longer seems to exist at that URL. @HulkHolden It's still available here: tmplusplus.

Your current implementation has a memory leak. The decoding table is computed as follows: