Inline functions vs Preprocessor macros

[Origin]: https://stackoverflow.com/questions/1137575/inline-functions-vs-preprocessor-macros

Preprocessor macros are just substitution patterns applied to your code. They can be used almost anywhere in your code because they are replaced with their expansions before any compilation starts.

Inline functions are actual functions whose body is directly injected into their call site. They can only be used where a function call is appropriate.

Now, as far as using macros vs. inline functions in a function-like context, be advised that:

  • Macros are not type safe, and they are expanded whether or not the result is syntactically correct; the compile phase will then report errors resulting from macro expansion problems.
  • Macros can be expanded in contexts where you don't expect them to be, causing problems (see the sketch just after this list).
  • Macros are more flexible, in that they can expand other macros, whereas inline functions do not.
  • Macros can produce unintended side effects, because the input expressions are copied wherever they appear in the pattern and may therefore be evaluated more than once.
  • Inline functions are not guaranteed to be inlined: some compilers only inline in release builds or when specifically configured to do so, and in some cases inlining may not be possible at all.
  • Inline functions can provide a scope for variables (particularly static ones); preprocessor macros can only do this in code blocks {…}, and static variables there will not behave exactly the same way.
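
As a quick illustration of the first two points, consider a naive squaring macro (a minimal sketch; SQUARE is a hypothetical example, not part of the original question):

#define SQUARE(x) x * x

int n = SQUARE(1 + 2);     // expands to 1 + 2 * 1 + 2, which is 5, not 9

// parenthesizing the pattern and its arguments avoids the precedence trap
#define SQUARE_OK(x) ((x) * (x))
int m = SQUARE_OK(1 + 2);  // expands to ((1 + 2) * (1 + 2)), which is 9

// the inline equivalent is type checked and evaluates its argument once
inline int square(int x) { return x * x; }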

First, preprocessor macros are just "copy and paste" applied to the code before compilation, so there is no type checking, and side effects can appear.

For example, say you want the maximum of two values:

#define max(a,b) ((a<b)?b:a)

The side effects appear if you use max(a++, b++), for example (whichever of a or b is larger ends up incremented twice). Instead, use (for example):

inline int max(int a, int b) { return (a < b) ? b : a; }
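
To see where the double increment comes from, this is roughly what the preprocessor produces for the macro version (a sketch; exact spacing varies):

int a = 1, b = 2;
int m = max(a++, b++);  // expands to: int m = ((a++ < b++) ? b++ : a++);
// the comparison increments both a and b once, then the selected branch
// (here b, the larger value) increments it a second time; the inline
// function instead evaluates a++ and b++ exactly once each, like any
// ordinary function call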

Using .NET how to convert ISO 8859-1 encoded text files that contain Latin-1 accented characters to UTF-8

[Origin]: http://stackoverflow.com/questions/2595442/using-net-how-to-convert-iso-8859-1-encoded-text-files-that-contain-latin-1-acc

I am being sent text files saved in ISO 8859-1 format that contain accented characters from the Latin-1 range (as well as normal ASCII a-z, etc.). How do I convert these files to UTF-8 using C# so that the single-byte accented characters in ISO 8859-1 become valid UTF-8 characters?

I have tried using a StreamReader with ASCIIEncoding, and then converting the ASCII string to UTF-8 by instantiating Encoding objects for ASCII and UTF-8 and calling Encoding.Convert(ascii, utf8, ascii.GetBytes(asciiString)), but the accented characters are being rendered as question marks.

What step am I missing?


You need to get the proper Encoding object. ASCII is exactly what its name says: it only supports 7-bit ASCII characters, which is why everything outside that range comes out as a question mark. If what you want to do is convert files, then the following is likely easier than dealing with the byte arrays directly.

using (System.IO.StreamReader reader = new System.IO.StreamReader(fileName,
                                       Encoding.GetEncoding("iso-8859-1")))
{
    using (System.IO.StreamWriter writer = new System.IO.StreamWriter(
                                           outFileName, Encoding.UTF8))
    {
        writer.Write(reader.ReadToEnd());
    }
}

However, if you want to have the byte arrays yourself, it’s easy enough to do with Encoding.Convert.

byte[] converted = Encoding.Convert(Encoding.GetEncoding("iso-8859-1"), 
    Encoding.UTF8, data);

It’s important to note here, however, that if you want to go down this road then you should not use an encoding-based string reader like StreamReader for your file IO. FileStream would be better suited, as it will read the actual bytes of the files.

In the interest of fully exploring the issue, something like this would work:

using (System.IO.FileStream input = new System.IO.FileStream(fileName,
                                    System.IO.FileMode.Open, 
                                    System.IO.FileAccess.Read))
{
    byte[] buffer = new byte[input.Length];

    int readLength = 0;

    while (readLength < buffer.Length) 
        readLength += input.Read(buffer, readLength, buffer.Length - readLength);

    byte[] converted = Encoding.Convert(Encoding.GetEncoding("iso-8859-1"), 
                       Encoding.UTF8, buffer);

    using (System.IO.FileStream output = new System.IO.FileStream(outFileName,
                                         System.IO.FileMode.Create, 
                                         System.IO.FileAccess.Write))
    {
        output.Write(converted, 0, converted.Length);
    }
}

In this example, the buffer variable gets filled with the actual data in the file as a byte[], so no conversion is done. Encoding.Convert specifies a source and destination encoding, then stores the converted bytes in the variable named, appropriately enough, converted. This is then written to the output file directly.

Like I said, the first option using StreamReader and StreamWriter will be much simpler if this is all you’re doing, but the latter example should give you more of a hint as to what’s actually going on.
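
As a middle ground, if your framework version has the System.IO.File helpers, the byte-array route collapses to a few lines (a sketch under that assumption; fileName and outFileName are the same illustrative names as above):

// read the raw Latin-1 bytes, convert, and write the UTF-8 bytes back out
byte[] latin1Bytes = System.IO.File.ReadAllBytes(fileName);

byte[] utf8Bytes = Encoding.Convert(Encoding.GetEncoding("iso-8859-1"),
                   Encoding.UTF8, latin1Bytes);

System.IO.File.WriteAllBytes(outFileName, utf8Bytes);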


How do you get assembler output from C/C++ source in gcc?

[Origin]: http://stackoverflow.com/questions/137038/how-do-you-get-assembler-output-from-c-c-source-in-gcc

Use the -S option to gcc (or g++).

gcc -S helloworld.c

This will run the preprocessor (cpp) over helloworld.c, perform the initial compilation and then stop before the assembler is run.

By default this will output a file helloworld.s. The output file name can still be set by using the -o option.

gcc -S -o my_asm_output.s helloworld.c
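
If you want the generated assembly annotated with the source constructs that produced it, gcc also accepts the -fverbose-asm flag (the optimization level here is just an example):

gcc -S -O2 -fverbose-asm helloworld.c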

Of course, this only works if you have the original source. An alternative, if you only have the resultant object file, is to use objdump with the --disassemble option (or -d for the abbreviated form).

objdump -S --disassemble helloworld > helloworld.dump

This option works best if the object file was compiled with debugging enabled (-g at compilation time) and the file hasn't been stripped.

Running file helloworld will give you some indication as to the level of detail that you will get by using objdump.
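
For instance, on a typical Linux system the output will look something like the following (illustrative; the details vary by platform and toolchain), where the trailing "not stripped" tells you the symbol information is still available:

helloworld: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
dynamically linked, with debug_info, not stripped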
