<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Milad Kahsari Alhadi</title>
    <description>The latest articles on Forem by Milad Kahsari Alhadi (@clightning).</description>
    <link>https://forem.com/clightning</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F327946%2F0bbbe3f2-634c-4717-aa75-140a7346e819.jpg</url>
      <title>Forem: Milad Kahsari Alhadi</title>
      <link>https://forem.com/clightning</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/clightning"/>
    <language>en</language>
    <item>
      <title>I/O Prioritization in Windows OS</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Mon, 01 Feb 2021 16:54:37 +0000</pubDate>
      <link>https://forem.com/clightning/i-o-prioritization-in-windows-os-15dm</link>
      <guid>https://forem.com/clightning/i-o-prioritization-in-windows-os-15dm</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;As you may know, with the introduction of Windows Vista (NT kernel release 6.0), Microsoft improved the NT kernel ecosystem in several areas. For example, Vista implements new features for mitigating I/O bottlenecks.&lt;/p&gt;

&lt;p&gt;Because disk or network I/O is often the performance bottleneck and many processes must contend for I/O access, I/O prioritization in Windows NT 6 was widely acclaimed. Unfortunately, it is not yet fully in use. As you will see, only two levels are usable by applications in Vista, Normal and Very Low (background mode), though more levels may be exposed in post-Vista operating systems such as Windows 7, 8.1, and 10.&lt;/p&gt;

&lt;p&gt;The mechanism Microsoft provided for applications is to enter background mode via the SetPriorityClass and/or SetThreadPriority APIs. However, background mode cannot be set for external processes, only for the calling process or thread. This lets the OS control all the priorities together, setting an appropriate background I/O priority along with an Idle CPU priority.&lt;/p&gt;

&lt;p&gt;Note that NT 6 also added distinguishing features such as the new Multimedia Class Scheduler Service and bandwidth reservation. These features attempt to guarantee I/O availability for playback in programs, such as Windows Media Player, that register themselves with the Multimedia Class Scheduler. This is what you should do if you need reliable streaming bandwidth.&lt;/p&gt;

&lt;p&gt;Besides these improvements, Microsoft also made many security improvements in Vista, including User Account Control, parental controls, Network Access Protection, a built-in anti-malware tool, and new digital content protection mechanisms; these features are not discussed in this article.&lt;/p&gt;

&lt;p&gt;In this article I discuss the need for prioritization, describe the various tactics that Microsoft uses to keep the system responsive, and provide information and guidelines for application and device driver developers who want to leverage these approaches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xKxSaYAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/px50p7igzz08bpz3dwga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xKxSaYAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/px50p7igzz08bpz3dwga.png" alt="2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The I/O Prioritization Concept
&lt;/h1&gt;

&lt;p&gt;To balance operating system throughput against responsiveness, processes and threads are given priorities so that more critical processes and threads are scheduled more frequently or given longer time slices (quanta).&lt;/p&gt;

&lt;p&gt;However, with today’s advanced systems, even low-priority background threads have the resources to create frequent and large I/O requests.&lt;/p&gt;

&lt;p&gt;These I/O requests are created without consideration of priority. Consequently, threads create I/O without the context of when the I/O is needed, how critical it is, and how it will be used. If a low-priority thread gets CPU time, it can easily queue hundreds or thousands of I/O requests in a short time.&lt;/p&gt;

&lt;p&gt;Because I/O requests typically take time to process, a low-priority thread can significantly hurt the responsiveness of the system by holding up high-priority threads and preventing them from getting their work done. This is why a machine can become less responsive while executing long-running low-priority services such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;disk defragmenters,&lt;/li&gt;
&lt;li&gt;multimedia-based apps,&lt;/li&gt;
&lt;li&gt;anti-ransomware apps,&lt;/li&gt;
&lt;li&gt;network-based apps,&lt;/li&gt;
&lt;li&gt;and so on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, every thread has a base-priority level determined by the thread’s priority value and the priority class of its process. The operating system uses the base-priority level of all executable threads to determine which thread gets the next slice of CPU time.&lt;/p&gt;

&lt;p&gt;If we want to know the base priority of a thread, we can call GetThreadPriority and GetPriorityClass to retrieve the priority level of a thread and the priority class of a process, respectively. In the following photo, you can see a binary that calls these APIs to report the current priority class of a process and the priority level of a thread.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xwUKAeza--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/p871qtoksbp6610ojbbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xwUKAeza--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/p871qtoksbp6610ojbbt.png" alt="3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Threads are scheduled in a round-robin fashion at each priority level, and only when there are no executable threads at a higher level will scheduling of threads at a lower level take place.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For a table that shows the base-priority levels for each combination of priority class and thread priority value, refer to the slide 3 photo of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The rest of this section explores the context that is required to properly prioritize I/O requests. Recall that threads are scheduled to run based on their scheduling priority; each thread is assigned one.&lt;/p&gt;

&lt;p&gt;As you can see in the above slide, the priority levels range from zero (lowest priority) to 31 (highest priority). Only the zero-page thread can have a priority of zero. The zero-page thread is a system thread responsible for zeroing any free pages when there are no other threads that need to run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dYf63m1Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/72ubjkxrsc2kd6doa3dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dYf63m1Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/72ubjkxrsc2kd6doa3dw.png" alt="4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The purpose of I/O prioritization is to improve system responsiveness without significantly decreasing overall throughput. System advances have often focused on improving the performance of the CPU to improve the work-throughput capabilities of the system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Input and output (I/O) devices allow us to communicate with the computer system. I/O is the transfer of data between primary memory and various I/O peripherals. Input devices such as keyboards, mice, card readers, scanners, voice recognition systems, and touch screens enable us to enter data into the computer. Output devices such as monitors, printers, plotters, and speakers allow us to get information from the computer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I/O devices have also seen throughput improvements. However, the largest performance bottleneck for storage devices is the seek time of the drive's actuator arm, which is often measured in milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1k1rCeBF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0abx1r38ksnn3swq6vhc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1k1rCeBF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0abx1r38ksnn3swq6vhc.png" alt="5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, it is easy to see how low-priority threads might be capable of flooding a device with I/O requests that starve the I/O requests of a higher-priority thread. Like the Windows thread scheduler, which is responsible for maintaining the balance among threads that are scheduled for the CPU, the I/O subsystem must take on the responsibility of maintaining the same kind of balance for I/O requests in the system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For both thread scheduling and I/O scheduling, the balance is driven not only by the need to optimize throughput but by the need to ensure an acceptable level of responsiveness to the user.&lt;br&gt;
When optimizing for more than just throughput, throughput might be sacrificed in favor of quickly completing the I/O request for which a user is waiting.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When responsiveness is considered, the user’s I/O is given higher priority. This causes the application to be more responsive, even though overall I/O throughput might decrease.&lt;/p&gt;

&lt;p&gt;If a system thread's I/O is serviced first at the cost of the application's ability to make progress, the user perceives the system as slower, even though throughput is actually higher.&lt;/p&gt;

&lt;h1&gt;
  
  
  I/O Access Patterns
&lt;/h1&gt;

&lt;p&gt;The following photo shows a complete categorization of I/O access patterns; the paragraphs below explain the concept in simple terms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PYxlYBwZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fgc4tyy3kjz2i0qdeht4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PYxlYBwZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fgc4tyy3kjz2i0qdeht4.png" alt="6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure that throughput is not sacrificed more than is required to maintain responsiveness, I/O access patterns must be considered. The reasons for copying data from a device into memory can be categorized into a few simple scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The OS might copy in binary files to be executed, or it might copy in data that executable programs need. An example would be launching Microsoft Word.&lt;/li&gt;
&lt;li&gt;An application might open a data file for use in its entirety. An example would be loading a Microsoft Word document so the user can edit the document.&lt;/li&gt;
&lt;li&gt;The file system might read or write file system metadata when a file is created, deleted, moved, or otherwise modified. An example would be creating a new Word document.&lt;/li&gt;
&lt;li&gt;A background task might attempt to do work that is not time-critical and should not interfere with the user's foreground tasks. An example would be antivirus software that is scanning files in the background.&lt;/li&gt;
&lt;li&gt;An application might open a data file for use as a stream. An example would be playing a song in VLC Player.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Loading Access Pattern
&lt;/h1&gt;

&lt;p&gt;Most I/O access falls into the pattern of atomic load before use: page, gather, and flush. Scenarios 1 to 3 fall into this category.&lt;/p&gt;

&lt;p&gt;For example, a program that is to be executed must first be loaded into memory. For this to happen, a set of I/O must be completed as a group before execution can start. The system might use techniques to page out parts of an executable for the sake of limited system resources, but it does so by applying the atomic load before use rule to subsections.&lt;/p&gt;

&lt;p&gt;After a program is running, it often performs tasks on a data file. An example is Microsoft Word, which operates on .doc files. Loading a data file involves another set of I/O that must be completed as a group before the user can modify the file. Saving the modified file back to the device involves yet another set of I/O that must be completed as a group. In this way, applications that load data files follow the atomic load-before-use pattern for their data files.&lt;/p&gt;

&lt;p&gt;Finally, when the file system updates its metadata because of actions that the user performs on the system, the file system must also atomically read or write its metadata before it may proceed with other operations that depend on that metadata.&lt;br&gt;
All of these scenarios follow the same access pattern. However, depending on the user’s focus at any given time, the urgency of each set of I/O operations might change.&lt;/p&gt;

&lt;p&gt;Additionally, I/O processing of threads may depend on each other and require some I/O to complete before other I/O can be started. File system I/O often finds itself in this situation. Improving the responsiveness of the system requires a method to ensure that I/O is completed in a certain order.&lt;/p&gt;
&lt;h1&gt;
  
  
  Passive Access Pattern
&lt;/h1&gt;

&lt;p&gt;A variation on atomic load before use is an application that is working to accomplish a set of I/O operations but recognizes that it is not the focus of the user and consequently should not interfere with the responsiveness of what the user is working on. Scenario 4 falls into this category.&lt;/p&gt;

&lt;p&gt;System processes that operate in the background often find themselves in this role. A background defragmenter is an example of such a service. A defragmented device delivers better responsiveness than a fragmented one because it requires fewer costly seeks to accomplish file reads and writes.&lt;/p&gt;

&lt;p&gt;However, it would be counterproductive to cause the user’s application to become less responsive due to the large number of I/O requests that the defragmenter is creating.&lt;/p&gt;

&lt;p&gt;Passive access patterns are used by applications whose tasks are often considered noncritical maintenance. Fundamentally, this means that the applications are not supposed to finish a task as soon as possible because their tasks are always ongoing. Such applications are not required to be responsive and must have a way to allow activities that require responsiveness to proceed unimpeded.&lt;/p&gt;
&lt;h1&gt;
  
  
  Streaming Access Pattern
&lt;/h1&gt;

&lt;p&gt;The opposite of the atomic load-before-use pattern is the streaming I/O pattern. In this kind of access, the application does not require all of the I/O in a set to be completed before it can begin its task; it can process data from completed I/O in parallel with retrieving the next set of data.&lt;br&gt;
The application requires the I/O to be accomplished in a specific order, and it requires the processing to be acceptably responsive, potentially within real-time limits. Scenario 5 falls into this category.&lt;/p&gt;

&lt;p&gt;An example of an application that uses a streaming I/O access pattern is Windows Media Player. For this type of application, the purpose of the application is to progress through the I/O set. Additionally, many media applications can compensate for missing or dropped frames of data.&lt;/p&gt;

&lt;p&gt;For this reason, a device that puts forth a large effort to accomplish a read might be taking the worst course of action because it holds up all other I/O and causes glitches in the media playback.&lt;/p&gt;
&lt;h1&gt;
  
  
  How can we change the priority?
&lt;/h1&gt;

&lt;p&gt;Aside from using the documented background mode priority class for SetPriorityClass, you can manually tweak the I/O priority of a running process via the NT Native APIs.&lt;/p&gt;

&lt;p&gt;Retrieving the process I/O priority is simple: if you know how to call any Windows API, you can call these. Documentation exists on the web, and the paragraph below summarizes their usage for setting and retrieving I/O priorities.&lt;/p&gt;

&lt;p&gt;NtQueryInformationProcess and NtSetInformationProcess are the NT Native APIs for getting and setting different classes of process information, respectively. For retrieval, you specify the type (class) of information and the size of your output buffer, and the call returns the requested information, or the buffer size needed to obtain it. The Set function works similarly, except that you already know the size of the data you are passing in. The following sections discuss the I/O prioritization strategies that correspond to the access patterns described earlier.&lt;/p&gt;
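&lt;p&gt;A minimal Windows-only sketch of that usage follows, not a definitive implementation: the information class for I/O priority, often called ProcessIoPriority with value 33, is undocumented and could change between releases, and NtSetInformationProcess must be resolved from ntdll.dll at run time. The constant names below are assumptions taken from common community headers.&lt;/p&gt;

```cpp
#include <windows.h>
#include <winternl.h>
#include <cstdio>

#pragma comment(lib, "ntdll.lib")

// ProcessIoPriority is not part of the documented PROCESS_INFORMATION_CLASS
// enumeration; the value 33 comes from community references (assumption).
static const PROCESS_INFORMATION_CLASS kProcessIoPriority =
    static_cast<PROCESS_INFORMATION_CLASS>(33);

typedef NTSTATUS(NTAPI* NtSetInformationProcess_t)(
    HANDLE, PROCESS_INFORMATION_CLASS, PVOID, ULONG);

int main()
{
    // Query the current I/O priority hint (0 = Very Low ... 3 = High).
    ULONG ioPriority = 0, returned = 0;
    NTSTATUS status = NtQueryInformationProcess(
        GetCurrentProcess(), kProcessIoPriority,
        &ioPriority, sizeof(ioPriority), &returned);
    if (status >= 0)
        std::printf("Current I/O priority hint: %lu\n", ioPriority);

    // Resolve NtSetInformationProcess from ntdll and lower the hint to 1 (Low).
    auto NtSetInformationProcess = reinterpret_cast<NtSetInformationProcess_t>(
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtSetInformationProcess"));
    if (NtSetInformationProcess) {
        ioPriority = 1;
        NtSetInformationProcess(GetCurrentProcess(), kProcessIoPriority,
                                &ioPriority, sizeof(ioPriority));
    }
    return 0;
}
```

Raising the I/O priority above normal this way generally requires elevated privileges, and drivers are free to ignore the hint.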
&lt;h2&gt;
  
  
  Hierarchy Prioritization Strategy
&lt;/h2&gt;

&lt;p&gt;The atomic transfer before use scenario that was described earlier can be addressed by a mechanism that marks an I/O set in a transfer for preferential treatment when the I/O set is being processed in a queue.&lt;/p&gt;

&lt;p&gt;A hierarchy prioritization strategy effectively allows marked I/O to be sorted before it is processed. This strategy involves several levels of priority that can be associated with I/O requests and thus can be handled differently by drivers that see the requests. Windows Vista currently uses the following priorities: critical (memory manager only), high, normal, and low.&lt;/p&gt;

&lt;p&gt;Before Windows Vista, all I/O was treated equally and can be thought of as being marked as a normal priority. With hierarchy prioritization, I/O can be marked as high priority so that it is put at the front of the queue. This strategy can take on finer granularity, and other priorities such as low or critical can be added. In this strategy, I/O is processed as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All critical-priority I/O must be processed before any high-priority I/O.&lt;/li&gt;
&lt;li&gt;All high-priority I/O must be processed before any normal-priority I/O.&lt;/li&gt;
&lt;li&gt;All normal-priority I/O must be processed before any low-priority I/O.&lt;/li&gt;
&lt;li&gt;All low-priority I/O is processed after all higher priority I/O.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a hierarchy prioritization strategy to work, all layers within the I/O subsystem must recognize and implement priority handling in the same way. If any layer in an I/O subsystem diverges in its handling of priority, including the hardware itself, the hierarchy prioritization strategy is at risk of being rendered ineffective.&lt;/p&gt;
&lt;h2&gt;
  
  
  Idle Prioritization Strategy
&lt;/h2&gt;

&lt;p&gt;The non-time-critical I/O scenario that was described earlier in this paper can be addressed by a mechanism that marks the set of I/O in a transfer to yield to all other I/O when they are being processed in a queue. Idle prioritization effectively forces the marked I/O to go to the end of the line.&lt;/p&gt;

&lt;p&gt;The idle strategy marks an I/O as having no priority. All I/O that has a priority is processed before a no-priority I/O. When this strategy is combined with the hierarchy prioritization strategy, all of the hierarchy priorities are higher than the non-priority I/O.&lt;/p&gt;

&lt;p&gt;Because all prioritized I/O goes before no-priority I/O, there is a very real possibility that a very active I/O subsystem could starve the no-priority I/Os. This can be solved by adding a trickle-through timer that monitors the no-priority queue and processes at least one non-priority I/O per unit of time.&lt;br&gt;
For an idle prioritization strategy to work, only one layer within the I/O subsystem must recognize and implement the idle strategy. After a no-priority I/O has been released from the no-priority queue, the I/O is treated as a normal-priority I/O.&lt;/p&gt;
&lt;h2&gt;
  
  
  Bandwidth-Reservation Strategy
&lt;/h2&gt;

&lt;p&gt;The streaming scenario described earlier in this paper can be addressed by a mechanism that reserves bandwidth within the I/O subsystem for use by a thread that is creating I/O requests. A bandwidth-reservation strategy effectively gives a streaming application the ability to negotiate a minimum acceptable throughput for I/O that is being processed.&lt;/p&gt;

&lt;p&gt;A bandwidth reservation is a request from an application for a certain amount of guaranteed throughput from the storage subsystem. Bandwidth reservations are extremely useful when an application needs a certain amount of data per period of time (such as streaming) or in other situations where the application might do bursts of I/O and require a real-time guarantee that the I/O will be completed in a timely fashion.&lt;/p&gt;

&lt;p&gt;The bandwidth-reservation strategy uses frequency as its priority scheme. This allows applications to ask for time slices, such as three I/Os every 50 ms, within the I/O subsystem. When coupled with the hierarchy prioritization strategy, streaming I/O gets the same minimum number of I/Os per unit of time, independent of the mix of critical‑, high‑, normal‑, and low-priority I/Os that are occurring in the system at the same time.&lt;/p&gt;

&lt;p&gt;For this strategy to work, only one layer within the I/O subsystem must recognize and implement the bandwidth reservation. Ideally, this layer should be as close as possible to the hardware. After an I/O has been released from the streaming queue, it is treated as a normal-priority I/O.&lt;/p&gt;
&lt;h1&gt;
  
  
  Implementing Prioritization in Applications
&lt;/h1&gt;

&lt;p&gt;Applications can use several Microsoft Win32 functions to take advantage of I/O prioritization. This section gives a brief overview of the functions that are available and discusses some potential usage patterns. Developers should consider the following when adjusting application priorities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Whenever an application modifies its priority, it risks potential issues with priority inversion. If an application sets itself or a particular thread in the application to run at a very low priority while holding a shared resource, it can cause threads that are waiting on that resource with higher priority to block much longer than they should.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Applications that use streaming should also be sensitive to causing starvation in other applications, although there is a hard limit on the amount of bandwidth that an application can reserve.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Setting the Priority for Hierarchy and Idle
&lt;/h1&gt;

&lt;p&gt;An application can request a lower-than-normal priority for I/O that it issues to the system. This means that the requests that the I/O subsystem generates on the application’s behalf contain the specified priority; at that point, the driver stack becomes responsible for deciding how to interpret the priority. Therefore, not all I/O requests that are issued with a low priority are, in fact, treated as such.&lt;/p&gt;

&lt;p&gt;Most applications use the process priority functions such as SetPriorityClass to request a priority. SetPriorityClass sets the priority class of the target process. Before Windows Vista, this function had no options to control I/O priority. Starting with Windows Vista, a new background priority class has been added. Two values control this class: the first sets the mode of the process to background and the second returns it to its original priority.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;windows.h&amp;gt;
#include &amp;lt;cstdio&amp;gt;

int main(int argc, char* argv[])
{
    // All threads of this process now issue low-priority I/O requests.
    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);

    // The primary thread alone can also be placed in background mode.
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    DWORD dw_priority_class = GetPriorityClass(GetCurrentProcess());
    std::printf("Priority class is 0x%lx\n", dw_priority_class);

    // GetThreadPriority returns a signed priority level.
    int priority_level = GetThreadPriority(GetCurrentThread());
    std::printf("Priority level is %d\n", priority_level);

    // Leave background mode; note that the thread-level mode is ended
    // with SetThreadPriority, not SetPriorityClass.
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following call starts background mode for the current process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SetPriorityClass(GetCurrentProcess(),PROCESS_MODE_BACKGROUND_BEGIN);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following call exits background mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While the target process is in background mode, its CPU, page, and I/O priorities are reduced. From an I/O perspective, each request that this process issues is marked with an idle priority hint (very low priority). A similar function for threads, SetThreadPriority, can be used to make only specific threads run at low priority.&lt;/p&gt;

&lt;p&gt;Finally, the SetFileInformationByHandle function can be used to associate a priority for I/O on a file-handle basis. In addition to the idle priority (very low), this function allows normal priority and low priority. Whether these priorities are supported and honored by the underlying drivers depends on their implementation (which is why they are referred to as hints).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FILE_IO_PRIORITY_HINT_INFO priorityHint;
priorityHint.PriorityHint = IoPriorityHintLow;
result = SetFileInformationByHandle(hFile, FileIoPriorityHintInfo, &amp;amp;priorityHint, sizeof(priorityHint));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Reserving Bandwidth for Streaming
&lt;/h1&gt;

&lt;p&gt;Applications that stream a lot of data, such as audio and video, often require a certain percentage of the bandwidth of the underlying storage system to deliver content to the user without glitches.&lt;/p&gt;

&lt;p&gt;The addition of bandwidth reservations, also known as scheduled file I/O (SFIO), to the I/O subsystem exposes a way for these applications to reserve a portion of the bandwidth of the disk for their usage.&lt;/p&gt;

&lt;p&gt;Applications can use the GetFileBandwidthReservation and SetFileBandwidthReservation functions to work with bandwidth reservations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BOOL
WINAPI
GetFileBandwidthReservation(
__in  HANDLE  hFile,
__out LPDWORD lpPeriodMilliseconds,
__out LPDWORD lpBytesPerPeriod,
__out LPBOOL  pDiscardable,
__out LPDWORD lpTransferSize,
__out LPDWORD lpNumOutstandingRequests
);
BOOL
WINAPI
SetFileBandwidthReservation(
__in  HANDLE  hFile,
__in  DWORD   nPeriodMilliseconds,
__in  DWORD   nBytesPerPeriod,
__in  BOOL    bDiscardable,
__out LPDWORD lpTransferSize,
__out LPDWORD lpNumOutstandingRequests
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An application that requires a throughput of 200 bytes per second from the disk would make the following call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;result = SetFileBandwidthReservation(hFile, 1000, 200, FALSE, &amp;amp;transfer_size, &amp;amp;outstanding_requests);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The values that are returned in transfer_size and outstanding_requests tell the application the size and number of requests with which they should try to saturate the device to achieve the desired bandwidth.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>windows</category>
      <category>cpp</category>
      <category>winapi</category>
    </item>
    <item>
      <title>Good, Bad, Ugly in Concurrent Programming with C++</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Thu, 20 Feb 2020 13:14:44 +0000</pubDate>
      <link>https://forem.com/clightning/good-bad-ugly-in-concurrent-programming-with-c-30ke</link>
      <guid>https://forem.com/clightning/good-bad-ugly-in-concurrent-programming-with-c-30ke</guid>
<description>&lt;p&gt;As you know, we can write and develop C++ programs in different paradigms: procedural, object-oriented, functional, and concurrent.&lt;/p&gt;

&lt;p&gt;Today, I wanted to discuss why we should care about using std::async when we want to develop software with the concurrency paradigm (especially multithreaded approach of concurrency, not multiprocessing).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0dd9fflprmvpj6q4haqn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0dd9fflprmvpj6q4haqn.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we want to develop a program whose threads execute concurrently, we can use std::thread, std::async, std::packaged_task, and so on, each of which has pros and cons.&lt;/p&gt;

&lt;p&gt;From what I have seen so far in concurrent programming, std::thread is the good worker, std::async is the bad worker, and std::packaged_task is the ugly worker. But why? Suppose we want to develop a program that has to run two tasks (functions) concurrently.&lt;/p&gt;

&lt;p&gt;If we use std::thread to develop such software, we should be aware that std::thread does not provide an easy way to return a value from a thread.&lt;/p&gt;

&lt;p&gt;We could get the return values of the tasks (functions) via references, pointers, or global variables, but these approaches share data between multiple threads and look a bit cumbersome.&lt;/p&gt;

&lt;p&gt;Another approach would be to use a condition variable. A condition variable is associated with a condition and synchronizes threads when the condition is fulfilled, but using one just to return a value from a thread seems like a lot of overhead.&lt;/p&gt;

&lt;p&gt;Fortunately, the STL provides a mechanism to return from a thread without using condition variables. The solution is to use std::future which provides a mechanism to access the result of asynchronous operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An asynchronous operation (created via std::async, std::packaged_task, or std::promise) can provide a std::future object to the creator of that asynchronous operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The creator of the asynchronous operation can then use a variety of methods to query, wait for, or extract a value from the std::future. These methods may block if the asynchronous operation has not yet provided a value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the asynchronous operation is ready to send a result to the creator, it can do so by modifying shared state (e.g. std::promise::set_value) that is linked to the creator's std::future.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note that std::future references shared state that is not shared with any other asynchronous return objects (as opposed to std::shared_future). &lt;/p&gt;

&lt;p&gt;However, as a first step, I wanted to use std::thread without worrying about how to get return values from concurrent functions, and without resorting to the std::future solution.&lt;/p&gt;

&lt;p&gt;With std::thread we can run tasks concurrently, but it has limitations: we cannot easily retrieve the return values of the functions, and we may face data races, race conditions, and other low-level multithreading issues.&lt;/p&gt;

&lt;p&gt;In the following photo, you can see the functions (tasks) that I wanted to execute in different threads:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiw2ebgbz9uv4buraq48q.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiw2ebgbz9uv4buraq48q.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With std::thread and the join mechanism (not detach), we can run those functions concurrently. Note that a C++ thread object does not always represent a thread of execution, which is an OS concept.&lt;/p&gt;

&lt;p&gt;When thread::join() is called, the calling thread will block until the thread of execution has completed. This is one mechanism that can be used to know when a thread has finished. When thread::join() returns, the OS thread of execution has completed and the C++ thread object can be destroyed.&lt;/p&gt;

&lt;p&gt;When thread::detach() is called, the thread of execution is detached from the thread object and is no longer represented by it; they become two independent things.&lt;/p&gt;

&lt;p&gt;The C++ thread object can be destroyed and the OS thread of execution can continue on. If the program needs to know when that thread of execution has completed, some other mechanism needs to be used. join() cannot be called on that thread object anymore, since it is no longer associated with a thread of execution.&lt;/p&gt;

&lt;p&gt;It is considered an error to destroy a C++ thread object while it is still “joinable”. That is, in order to destroy a C++ thread object either join() needs to be called or detach() must be called.&lt;/p&gt;

&lt;p&gt;If a C++ thread object is still joinable when it is destroyed, std::terminate will be called. In the following photo, you can see how we can use std::thread to run the Task1 and Task2 functions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu4nka26fhiqa6cal63tg.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu4nka26fhiqa6cal63tg.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we execute the program, we get the following output, which shows that our program works correctly. You can also observe the pattern in which the Windows scheduler runs the scheduled threads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhf3ydxmy6u2u8e98ly9n.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhf3ydxmy6u2u8e98ly9n.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have no problem here with std::thread, because std::thread reliably gives us this kind of concurrent execution pattern. But when we want to execute functions that return a value, std::thread does not provide an easy, well-known way to get at those return values.&lt;/p&gt;

&lt;p&gt;In that situation we should run the tasks with std::async, which returns a std::future object; the get member function of that std::future lets us retrieve the return value of the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ehk4rhqex4gq1jru8js.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ehk4rhqex4gq1jru8js.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, what happens if we execute the functions with std::async? I rewrote the program with std::async as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flgmzg07lrmgh9npyg9j3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flgmzg07lrmgh9npyg9j3.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is interesting: when we used std::async to run the tasks, it did not give us a truly concurrent execution result, and the functions were not executed by different, independent threads.&lt;/p&gt;

&lt;p&gt;As you can see, both tasks run in the context of the same thread, with ID 1268; that is, a single thread executes both Task1 and Task2. You can see the execution result of the program in the following photo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxcskegja9vs9gqn7t0jo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxcskegja9vs9gqn7t0jo.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See? Task1 and Task2 were executed by one thread. But if we keep the std::future returned by std::async, we get a different output. For example, if we rewrite the program as follows, the output changes again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu72bwb0yprbexuq1hgar.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu72bwb0yprbexuq1hgar.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we execute the program, we get the following output, which is interesting again because it now shows a concurrent execution pattern: main, Task1, and Task2 were each executed by different threads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3ogdyxukt1pww6ny3fcg.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3ogdyxukt1pww6ny3fcg.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, that is weird. When we store the std::future returned by std::async, the program acts concurrently; when we discard it, the calls are serialized, because the destructor of the temporary std::future blocks until the task completes. This uncontrolled behavior can be dangerous in some situations. But wait: what do we get if we call std::async with the std::launch::deferred execution policy?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3qx3ps49ww8rilynnnu1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3qx3ps49ww8rilynnnu1.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we execute the program with std::launch::deferred (lazy evaluation), it runs in a completely serialized flow and gives us the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx12w5bxwcgyymmd4kp7u.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx12w5bxwcgyymmd4kp7u.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that main, Task1, and Task2 were all executed by the same thread. In other words, with the deferred policy, the functions are executed in the calling (main) context, which in some situations is exactly what you want.&lt;/p&gt;

&lt;p&gt;However, the goal of this article was the fact: &lt;strong&gt;YOU SHOULD BE AWARE THESE EXECUTION PATTERNS OF THE ASYNC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You should keep these launch policies in mind when developing concurrent software, because std::async behaves differently depending on which one you use, especially under the default policy of std::launch::async | std::launch::deferred. With the default policy, the implementation decides at runtime whether to run the task on a new thread or to defer it.&lt;/p&gt;

&lt;p&gt;However, if you want your program to take advantage of independent threads to execute tasks concurrently, without conflicts or uncontrolled behavior, you should use std::packaged_task, which lets you run tasks with std::thread while still retrieving their return values through std::future.&lt;/p&gt;

&lt;p&gt;In the following photo, you can see the program rewritten with std::packaged_task. std::packaged_task is one of the possible ways of associating a task with a std::future.&lt;/p&gt;

&lt;p&gt;The benefit of a packaged task is that it decouples the creation of the future from the execution of the task. If you want to run a task on a thread of your own, this solution gives you that control explicitly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fceh2f62xtxeocbzgmv5h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fceh2f62xtxeocbzgmv5h.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we execute the program, we get a fully concurrent result, because we used std::thread. And because the packaged task provides a future object, we can use it to retrieve the tasks' return values without any trouble. The typical usage of a packaged task is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;creating the packaged task with a function,&lt;/li&gt;
&lt;li&gt;retrieving the future from the packaged task,&lt;/li&gt;
&lt;li&gt;passing the packaged task elsewhere,&lt;/li&gt;
&lt;li&gt;invoking the packaged task.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzdsiv9bzdoim9rk56z2w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzdsiv9bzdoim9rk56z2w.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the difference you should fully understand: std::packaged_task decouples the creation of the future from the execution of the task. We saw how to benefit from this decoupling; we created the tasks in one thread and executed them in another.&lt;/p&gt;

&lt;p&gt;Also, as I mentioned already, with std::packaged_task we can easily get the return value of a function while still running it with std::thread. In the following example, you can see how I retrieved the return value of a function and printed it to the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5n3fyie6q0hred1ucg8o.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5n3fyie6q0hred1ucg8o.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anyway, packaged_task, async, thread, future, promise, and concurrent programming in C++ form a long story, of which this article covers only a tiny part.&lt;/p&gt;

&lt;p&gt;Also, in a conversation with Felix Petriconi, he said to me: in general, there is a big issue with std::async and std::future. The complete design is broken in many ways and the C++ committee is currently trying to fix it. We hope to get it in C++23.&lt;/p&gt;

&lt;p&gt;Nevertheless, for more information about multithreading you should pick up a book and work through it, with all its ups and downs. It is a broad and hot topic in software development and software engineering.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>c</category>
      <category>multithreading</category>
      <category>concurrency</category>
    </item>
    <item>
      <title>How and why overloading, templates, and auto deduction were invented?</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Fri, 31 Jan 2020 20:08:23 +0000</pubDate>
      <link>https://forem.com/clightning/how-and-why-overloading-templates-and-auto-deduction-were-invented-1c86</link>
      <guid>https://forem.com/clightning/how-and-why-overloading-templates-and-auto-deduction-were-invented-1c86</guid>
      <description>&lt;p&gt;Why we need the functions/classes template and how/why this concept created by C++ compiler engineers? In this medium post, I want to go through the steps which made c++ experts think about the template concept and auto deduction in the modern days of software development.&lt;/p&gt;

&lt;p&gt;As you know, when we write programs in C, our software is nothing more than tons of functions and structures. In C, we must implement separate functions for every task. For example, suppose we want to implement a calculator in C.&lt;/p&gt;

&lt;p&gt;We would have to implement multiple variants of the add, mul, div, and other functions for the different data types such as int, float, and double. It is a complete nightmare to repeat the same implementation and to choose a unique name for each function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cf5k5lNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9i2o2lnfg165xb0hi913.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cf5k5lNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9i2o2lnfg165xb0hi913.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, in the above program, we implement three different add functions so that integer, float, and double inputs are all handled correctly.&lt;/p&gt;

&lt;p&gt;If you look carefully, the add function has the same implementation even for the different data types. These three add functions merely have different names and accept different data types; their internal implementations are completely identical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8PZCPyoB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0m6tkoobvmxmvc99ep88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8PZCPyoB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0m6tkoobvmxmvc99ep88.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above photo shows the disassembled output of the add program, generated by IDA Pro. From it, you can understand why every function needs a unique name: without unique names, we could not call them correctly in our program. But as you may have realized, choosing a unique name for every function is a tedious task, and if we choose a name carelessly, a name conflict can ruin our program.&lt;/p&gt;

&lt;p&gt;So if a compile-time feature could automatically name the different variants of the add function for us in the background, our pain would be reduced significantly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overloading Feature of C++
&lt;/h1&gt;

&lt;p&gt;C++ solved this problem of C completely with the function overloading feature. For example, in C++ we can use the following pattern to implement the add program with far less pain than in C.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--au6XM5zH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yoh7vlmtyh4ijqx7ydw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--au6XM5zH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yoh7vlmtyh4ijqx7ydw1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Can you see the difference between the C and C++ versions of the program? With the overloading feature, C++ lets us use a single name for all the different implementations of the add function. But how does the compiler manage this overloading in the background, and how does the linker resolve these overloaded functions at link time?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G7EIxDDo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rxhxkdxry0u8qzl3pp7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G7EIxDDo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rxhxkdxry0u8qzl3pp7y.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you disassemble the add program written in C++, you will find that the C++ compiler gave each add function a unique name at the machine/assembly level, even though we used a single name at the source-code level.&lt;/p&gt;

&lt;p&gt;For example, in the above photo, you can see that the Add function that takes integer input has been decorated with the name j__?Add@YAHH@Z. This naming convention and decoration make the functions distinguishable from each other at the linking phase.&lt;/p&gt;

&lt;p&gt;As a result, when the linker encounters an add function decorated as j__?Add@YAHH@Z, it can resolve the address of that function correctly without any conflict with the other variants of the add function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I should mention this note here:&lt;/strong&gt; Functions, data, and objects in C++ programs are represented internally by their decorated names. A decorated name is an encoded string created by the compiler during the compilation of an object, data, or function definition. It records calling conventions, types, function parameters and other information together with the name. &lt;strong&gt;This name decoration, also known as name mangling, helps the linker find the correct functions and objects when linking an executable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The decorated naming conventions have changed in various versions of Visual Studio, and can also be different on different target architectures. To link correctly with source files created by using Visual Studio, C and C++ DLLs and libraries should be compiled by using the same compiler toolset, flags, and target architecture.&lt;/p&gt;

&lt;h1&gt;
  
  
  Format of a C++ decorated name
&lt;/h1&gt;

&lt;p&gt;The decorated name for a C++ function contains the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The function name.&lt;/li&gt;
&lt;li&gt;The class that the function is a member of, if it is a member function. This may include the class that encloses the class that contains the function, and so on.&lt;/li&gt;
&lt;li&gt;The namespace the function belongs to, if it is part of a namespace.&lt;/li&gt;
&lt;li&gt;The types of the function parameters.&lt;/li&gt;
&lt;li&gt;The calling convention.&lt;/li&gt;
&lt;li&gt;The return type of the function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The function and class names are encoded in the decorated name. The rest of the decorated name is a code that has internal meaning only for the compiler and the linker.&lt;/p&gt;

&lt;h1&gt;
  
  
  Function Template
&lt;/h1&gt;

&lt;p&gt;However, can we make the C++ compiler even smarter? For example, can we implement a compile-time feature so that we never need to reimplement the add function for different data types? Yes: C++ templates solve this problem too and make coding a lot simpler.&lt;/p&gt;

&lt;p&gt;With this feature, we give the C++ compiler a blueprint, a template, from which it can generate an overloaded function based on the parameters we pass to the function template.&lt;/p&gt;

&lt;p&gt;As you may have guessed, the functions generated from a template blueprint rely internally on the overloading capability of the C++ compiler.&lt;/p&gt;

&lt;p&gt;These function templates are special functions that can operate with generic types. This allows us to create a function template whose functionality can be adapted to more than one type without repeating the entire code for each type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ejV-flk6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4dy9dl8nwpm4samp39s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ejV-flk6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4dy9dl8nwpm4samp39s7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the sample code above, in C++ we use a template to avoid reimplementing the same operation for different data types. The C++ compiler manages this task automatically in the background.&lt;/p&gt;

&lt;p&gt;A template parameter is a special kind of parameter that can be used to pass a type as an argument: just as regular function parameters pass values to a function, template parameters let us pass types to a function. The function template can then use these parameters as if they were any other regular type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Njwvk7fQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/btu37vlzvtoz0lnh4mix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Njwvk7fQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/btu37vlzvtoz0lnh4mix.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you look at the disassembled output of the program, you will find that the compiler generated a set of decorated, overloaded functions to handle the different types of parameters we pass to the function template.&lt;/p&gt;

&lt;p&gt;In other words, when we use a function template, we give the compiler a blueprint, which it uses to generate a set of overloaded functions based on the data types of the inputs we pass to it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Auto Keyword and Auto Deduction
&lt;/h1&gt;

&lt;p&gt;But can we make the C++ compiler smarter still? For example, can it figure out the data type of a function's return value on its own? Yes, we can: if we want to let the compiler determine the return type of a function itself, we can now use the auto keyword.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MAVdXoVN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z7wbadfxf8xj9tqzq2mq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MAVdXoVN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z7wbadfxf8xj9tqzq2mq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For variables, the auto keyword specifies that the type of the variable being declared will be deduced automatically from its initializer; for functions, it specifies that the return type will be deduced from the return statements.&lt;/p&gt;

&lt;p&gt;As you can see, with templates and the auto deduction capability of the C++ compiler, our code has become simpler and more elegant than before. I think these features make programming and software development in C++ more enjoyable.&lt;/p&gt;

&lt;p&gt;In this article, I wanted to discuss how the ideas of overloading, templates, and the auto keyword were invented, and how they made C++ programming more enjoyable than before.&lt;/p&gt;

</description>
      <category>c</category>
      <category>cpp</category>
      <category>programming</category>
      <category>stl</category>
    </item>
    <item>
      <title>Substitution Failure is Error and Not An Error.</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Fri, 31 Jan 2020 19:57:41 +0000</pubDate>
      <link>https://forem.com/clightning/substitution-failure-is-error-and-not-an-error-19jg</link>
      <guid>https://forem.com/clightning/substitution-failure-is-error-and-not-an-error-19jg</guid>
      <description>&lt;p&gt;I decided to write an article about the Substitution Failure for people who are looking for a clear and step by step introduction about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; if you find errors or misconceptions in this article, please point them out.&lt;/p&gt;

&lt;h1&gt;
  
  
  Name Lookup:
&lt;/h1&gt;

&lt;p&gt;The first thing we should understand before digging into the SFINAE concept is the name lookup mechanism. What is it?&lt;/p&gt;

&lt;p&gt;When we write a program, the final binary is nothing more than lots of functions, methods, classes, structs, and so on, and none of these components mean anything to the operating system's dynamic loader except for their addresses and their opcodes/instructions. For example, consider the following simple function (it is just an example):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tmRO2_ko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvpcbxgmysw9yppgylwj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tmRO2_ko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvpcbxgmysw9yppgylwj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the operating system (Windows) is to execute this function, it needs the address of the DoingRandomness function after the binary is loaded into memory. Different operating systems use different file formats for executable files, and the dynamic loader uses the information in that file format to determine the exact locations of functions and other essential components.&lt;/p&gt;

&lt;p&gt;So, in the compilation/linking phase, the C++ toolchain must determine the addresses of DoingRandomness and other functions such as main in memory, and fill the Optional Header and other essential PE attributes with those addresses and offsets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJgkRXYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0zc25mmkkqtsnr1zrf8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJgkRXYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0zc25mmkkqtsnr1zrf8y.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But how does the compiler/linker perform address resolution, from a high-level point of view? Simply put, it goes through a mechanism called name lookup / address resolution.&lt;/p&gt;

&lt;p&gt;First, it discovers in which translation unit or namespace the function has been declared, and then it calculates the addresses of the program's functions and other components.&lt;/p&gt;

&lt;p&gt;Then it starts to fill the essential PE attributes and tables, such as the ImageBase attribute with the exact load address of the EXE file in memory, and the AddressOfEntryPoint attribute with the offset of the main function relative to the ImageBase value.&lt;/p&gt;

&lt;p&gt;For example, once the compiler fills the PE attributes with this information, the Windows dynamic loader can use ImageBase + AddressOfEntryPoint to find the exact location of the main function in memory: 0x00400000 + 0x0001140B = 0x0041140B.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DD0Man-U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k7foz1rkmuahg6dkyfl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DD0Man-U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k7foz1rkmuahg6dkyfl3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the compiler has determined the function's address in memory, we can call it from main (or elsewhere) and use it. This is a simple process for free functions defined in the global namespace or any other namespace, but for template functions everything is different and takes more effort. We can summarize the lookup process for a free function in the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name Lookup&lt;/li&gt;
&lt;li&gt;Address Resolution (ImageBase + Offset)&lt;/li&gt;
&lt;li&gt;Getting the Absolute Address&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Template Functions Issues
&lt;/h1&gt;

&lt;p&gt;As I said before, everything is different when it comes to template functions. For example, suppose we define the following template function (for illustration purposes):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X8WSU_-I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jeznuygct5gnqfgd5tgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X8WSU_-I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jeznuygct5gnqfgd5tgy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The C++ compiler must go through the following steps before it can finally assign an address to the function, so that we can call it and use it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name Lookup&lt;/li&gt;
&lt;li&gt;Argument Type Deduction&lt;/li&gt;
&lt;li&gt;Type Substitution&lt;/li&gt;
&lt;li&gt;Generate an Overloaded Instance of the Function&lt;/li&gt;
&lt;li&gt;Add the Overloaded Function to the Overload Set&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As in the usual resolution process, when you declare a template function or even a free function, the compiler/linker must determine in which translation unit or namespace the function has been declared and defined.&lt;/p&gt;

&lt;p&gt;Once it finds out in which unit/namespace the function is defined, in the second step it must deduce the type of the argument passed to the template function so that it can generate an overloaded instance of Doing from the Doing template. For example, consider the following one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hbS451s4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z494qil0894ar2s2eoy4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hbS451s4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z494qil0894ar2s2eoy4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the compiler sees the above code, it deduces that the argument passed to the function has type std::string, so in the next step it tries to substitute the type name T of the Doing template function with const std::string&amp;amp;, as the following example shows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Td1DypEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/48obps3jxd4rld75n2cp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Td1DypEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/48obps3jxd4rld75n2cp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After type substitution succeeds, the compiler can generate a free function like the one above and add it to the overload set of the Doing function template. Finally, the compiler can calculate the absolute address of the function, and we can call it and use it. But what happens if the programmer calls the following function?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9qlDf4WR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tdoqti3uii7scysmk57m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9qlDf4WR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tdoqti3uii7scysmk57m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes, here is the nightmare. If we call the function like this to instantiate the template, the compiler will fail at the type substitution step, because there is no matching candidate for the instantiation Doing(std::string&amp;amp;, const char[19]).&lt;/p&gt;

&lt;p&gt;Put simply, the type substitution step has failed. In other words, a type substitution failure has happened, and the compiler will generate an error for it. So substitution failure is an error here, right?&lt;/p&gt;

&lt;p&gt;When we write a program, we may encounter these kinds of metaprogramming problems, and we can handle them through different techniques.&lt;/p&gt;

&lt;p&gt;For example, if a substitution failure like this one happens, we can change the template function to the following implementation to handle the type substitution error during instantiation of the template function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--atDsMJVW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ff4fd3q4fttqramjjtma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--atDsMJVW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ff4fd3q4fttqramjjtma.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or we can use template specialization, or other metaprogramming techniques, to handle these kinds of issues. But, you know, there are situations in which substitution failure is not an error for the compiler.&lt;/p&gt;

&lt;p&gt;So when we say SFINAE, this is what we mean. In other words, this rule applies during overload resolution of function templates: when substituting the explicitly specified or deduced type for the template parameter fails, the specialization is discarded from the overload set instead of causing a compile error. As a result, our program may silently produce the wrong output, because the overload that remains may have the wrong implementation.&lt;/p&gt;

&lt;p&gt;Nevertheless, whether we notice that substitution failure has happened or not, it is just a failure that we can handle in many ways: template specialization, template overloading, enable_if, and so on. I hope my notes about SFINAE have helped you understand what it is all about.&lt;/p&gt;

&lt;p&gt;Email: &lt;a href="mailto:m.kahsari@gmail.com"&gt;m.kahsari@gmail.com&lt;/a&gt;&lt;br&gt;
Site: &lt;a href="http://www.aiooo.ir"&gt;www.aiooo.ir&lt;/a&gt;&lt;/p&gt;

</description>
      <category>c</category>
      <category>cpp</category>
      <category>vscode</category>
    </item>
    <item>
      <title>How to write Clean, Beautiful and Effective C++ Code</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Fri, 31 Jan 2020 19:47:27 +0000</pubDate>
      <link>https://forem.com/clightning/how-to-write-clean-beautiful-and-effective-c-code-17kh</link>
      <guid>https://forem.com/clightning/how-to-write-clean-beautiful-and-effective-c-code-17kh</guid>
      <description>&lt;p&gt;As you know, a naming convention is a set of rules for choosing the character sequence to be used for identifiers which denote variables, types, functions, classes, objects and other entities in source code and documentation.&lt;/p&gt;

&lt;p&gt;The most important reason for using a naming convention is to reduce the effort needed to read and understand source code; also many companies have also established their own set of conventions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XjXxifrw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/phb6akzrl8ra4aa230xf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XjXxifrw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/phb6akzrl8ra4aa230xf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, I want to share with you a set of simple and straightforward rules for writing better, more beautiful, more effective, and more readable C++ code in both Windows and Linux environments, with only minor code modifications, because Linux and Windows have different naming conventions.&lt;/p&gt;

&lt;p&gt;In this framework, I provide a set of rules for the following entities (I will complete it in the future):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Local variable&lt;/li&gt;
&lt;li&gt;Global variable&lt;/li&gt;
&lt;li&gt;Functions&lt;/li&gt;
&lt;li&gt;Classes&lt;/li&gt;
&lt;li&gt;Methods&lt;/li&gt;
&lt;li&gt;Fields&lt;/li&gt;
&lt;li&gt;Structure&lt;/li&gt;
&lt;li&gt;Arguments&lt;/li&gt;
&lt;li&gt;Objects&lt;/li&gt;
&lt;li&gt;Namespaces&lt;/li&gt;
&lt;li&gt;Templates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I call this coding convention &lt;strong&gt;Milad CPP Syntax&lt;/strong&gt; (MCS), both for future reference and to give myself the chance to keep working on it for the C++ committee and community.&lt;/p&gt;

&lt;h1&gt;
  
  
  Variables
&lt;/h1&gt;

&lt;p&gt;Based on my experience, prefixing variables with the first character of their type and their storage class is the best way to express them at the source code level.&lt;/p&gt;

&lt;p&gt;The following example shows this naming convention for variables in Windows (Visual Studio) and Linux (NeoVim):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zCdpuOdQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lyyi7eo7mwbs6snho3la.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zCdpuOdQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lyyi7eo7mwbs6snho3la.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Microsoft uses Camel notation for naming the native APIs and variables of the Windows OS, we should use this notation too. In Linux, in contrast to Windows, entities are named differently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2HDBrply--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3jw3obtp8521q7k860pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2HDBrply--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3jw3obtp8521q7k860pz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Linux uses lowercase character sequences with underscores for naming variables, functions, and other entities, so we should define our variables and other entities according to that convention for the sake of compatibility with the environment itself. The above code follows the standard that Linux uses.&lt;/p&gt;

&lt;h1&gt;
  
  
  Free Standing Functions
&lt;/h1&gt;

&lt;p&gt;When we are programming with the Procedural Paradigm in Modern CPP, functions, and structures are the most important entities for us.&lt;/p&gt;

&lt;p&gt;Naming functions and structure with a descriptive and self-contained name are so crucial for writing and also reading the CPP code in both Windows and Linux OS.&lt;/p&gt;

&lt;p&gt;However, Windows kernel interfaces (APIs and COM interfaces) such as WriteConsole are named in Camel style with an uppercase first character, while C library functions use lowercase characters and short descriptive names like wcslen. We should therefore use a convention similar to the Windows interfaces for our user-defined free-standing functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ijvGNgqV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/epv6yefi139rgyj58vfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ijvGNgqV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/epv6yefi139rgyj58vfl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the best way to define a free-standing function in Windows is to use Camel notation with an uppercase first character, like PrintMessage in the above example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qMOjHN3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vfjlawt4ldxk3jyzztpb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qMOjHN3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vfjlawt4ldxk3jyzztpb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In contrast to Windows, it is better to define user-defined functions in Linux with lowercase characters, separating words with underscores, because everything in Linux, from the POSIX and SUS interfaces to the C/C++ libraries, is based on lowercase characters, underscore separation, and short names.&lt;/p&gt;

&lt;h1&gt;
  
  
  Functions (and Methods) Arguments
&lt;/h1&gt;

&lt;p&gt;As you know, there is a subtle difference between argument variables and other kinds of variables, so we should make some distinction between arguments and other variables in our source code.&lt;/p&gt;

&lt;p&gt;No matter which environment we are writing C++ code in, we should mark arguments with a prefix such as arg_ or arg to make them as clear as possible, as in the following example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kAcrRijz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vybjsd5kja3pl8voxzjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kAcrRijz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vybjsd5kja3pl8voxzjx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As with Windows, we should follow this rule in Linux too. It makes our code self-descriptive, self-contained, and beautiful. Now we can simply read the code and interpret it without confusing arguments with other kinds of entities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2HBxUKge--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4wj0svyu5br5mmn4k3ke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2HBxUKge--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4wj0svyu5br5mmn4k3ke.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Structure and Classes
&lt;/h1&gt;

&lt;p&gt;There is a set of differences between classes and structures in C++, but conceptually they are similar to each other (at least from an object-oriented programming perspective).&lt;/p&gt;

&lt;p&gt;Nevertheless, when we write C++ code in the OOP or procedural paradigm, we need classes and structures respectively.&lt;/p&gt;

&lt;p&gt;So we should have a well-defined convention for defining them, because they are the most important entities in object-oriented and procedural programming, and in software engineering/development with modern C++.&lt;/p&gt;

&lt;p&gt;To define class and structure blueprints, we should use uppercase, Camel-based names, as in the following example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9GWZ056V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u1sh74p6ckathpnicahg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9GWZ056V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u1sh74p6ckathpnicahg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, note the method definitions in the structure blueprint example. As with free-standing functions, we should define methods (functions inside classes or structures) with an uppercase first character and Camel notation.&lt;/p&gt;

&lt;p&gt;It is also recommended to name an object instance of the Information blueprint with a capital O character in Windows and an o_ prefix in Linux. In the following example, I define a class with the same concept as the above structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hSgBfudX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/g2eki6rc1bf6yakvanr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hSgBfudX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/g2eki6rc1bf6yakvanr1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Linux, we should define the above code with a small modification to the method definitions and the object instance. It is recommended to follow the Linux naming convention everywhere; because of that, in Linux it is better to define methods like other functions, with lowercase characters and underscores.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D3cRbmbM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sokczbp4tb5biy6618c7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D3cRbmbM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sokczbp4tb5biy6618c7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this notation, we can see at a glance that an identifier is not a simple user-defined entity. You should also take care with one important rule when naming fields in C++. Unfortunately, some programmers use a double underscore at the beginning of their field names, which leads to undefined behavior in practice, because the implementation reserves identifiers beginning with __. So we should not define field members with double underscores, because:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The use of two sequential underscore characters ( __ ) at the beginning of an identifier, or a single leading underscore followed by a capital letter, is reserved for C++ implementations in all scopes. You should avoid using one leading underscore followed by a lowercase letter for names with file scope because of possible conflicts with current or future reserved identifiers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nevertheless, it is better to prefix the fields of a class with their type and accessibility. For example, in the above example I prefix all fields with pf (public field) to distinguish class fields from other entities.&lt;/p&gt;

&lt;p&gt;Now, at a glance, we can see the difference between each element in our source code. In addition, our code has become more legible, and its simplicity has been greatly improved.&lt;/p&gt;

&lt;h1&gt;
  
  
  Templates
&lt;/h1&gt;

&lt;p&gt;Templates are the foundation of generic programming, which involves writing code in a way that is independent of any particular type. A template is a blueprint or formula for creating a generic class or a function. The library containers like iterators and algorithms are examples of generic programming and have been developed using the template concept.&lt;/p&gt;

&lt;p&gt;However, in most example code and documentation, when someone defines a template, they use a single letter like T or something similar. But this reduces the readability of the source code; we should use meaningful names rather than just T or Tr.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zxj6Iewe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z098e4sfmup74xsc4bm3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zxj6Iewe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z098e4sfmup74xsc4bm3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Linux, we should use the same template naming convention in our code. It makes our code as meaningful as possible when we are involved with metaprogramming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TOi4cgDD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i2sf5jf6xr9h5cdlqk29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TOi4cgDD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i2sf5jf6xr9h5cdlqk29.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have used the above naming convention in my career since 2012. I think it is the most useful naming convention for both professionals and newcomers looking at your code and trying to comprehend it. If you have any comments, feel free to send them to me.&lt;/p&gt;

</description>
      <category>c</category>
      <category>cpp</category>
      <category>windowsdev</category>
      <category>linux</category>
    </item>
    <item>
      <title>Why PE needs Original First Thunk(OFT)?</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Fri, 31 Jan 2020 19:15:52 +0000</pubDate>
      <link>https://forem.com/clightning/why-pe-needs-original-first-thunk-oft-2c4p</link>
      <guid>https://forem.com/clightning/why-pe-needs-original-first-thunk-oft-2c4p</guid>
      <description>&lt;p&gt;Let me summarize a lot of things for you here. When you load a Library, for example, Milad.dll and then try to call a function from that like MPrint, dynamic loader of the windows operating system has to resolve the address of the MPrint function and then call it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MDXjiocZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/efuai5carmac7tsrip6j.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MDXjiocZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/efuai5carmac7tsrip6j.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How can OS resolve the address of that function?&lt;/p&gt;

&lt;p&gt;Windows goes through some really complicated steps, which I want to explain in plain language. To resolve the address of a function in a DLL, the dynamic loader has to check the Import Name Table (INT), the Import Ordinal Table (IOT), and the Import Address Table (IAT).&lt;/p&gt;

&lt;p&gt;These tables are pointed to by the AddressOfNames, AddressOfNameOrdinals, and AddressOfFunctions members of the export directory of the PE structure (the DLL).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N2FepBLS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2hw8o28zor7nzgzeak9b.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N2FepBLS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2hw8o28zor7nzgzeak9b.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the OS loads Milad.dll into the address space of the target process with the help of LoadLibrary, it fills the INT, IOT, and IAT tables with the corresponding RVAs in the target process's address space, using GetProcAddress and doing some calculation.&lt;/p&gt;

&lt;p&gt;There is an array of import descriptors in the PE structure whose entries have OriginalFirstThunk, TimeDateStamp, ForwarderChain, Name, and FirstThunk members, and these members point to some important addresses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TtyHAyNS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o6r2f1g70polkqrx1332.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TtyHAyNS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o6r2f1g70polkqrx1332.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Name in the import directory (Image_Import_Descriptor) points to the name of the DLL from which the process imports; in this example, that DLL is Milad.dll.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OriginalFirstThunk points to the Import Name Table, which contains the names of the functions exported by Milad.dll. Each function in this table has a unique index; the loader takes that index, goes to the Import Ordinal Table, and reads the value stored at that index, which is another integer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;FirstThunk is another important member, and it points to the IAT. In the previous step, the dynamic loader obtained an integer value from the IOT; this value is an index into the IAT. Once the dynamic loader has found the correct address of the function, it writes that address into the Import Address Table entry for the MPrint function, so the process can call the function through its address.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a simple explanation of the complicated work the loader does to resolve the addresses of functions in DLLs via the Name, OFT (INT), and FT (IAT) members of Image_Import_Descriptor.&lt;/p&gt;

</description>
      <category>c</category>
      <category>cpp</category>
      <category>windowsdevelopment</category>
      <category>security</category>
    </item>
    <item>
      <title>Why We should care about Floating-Point Numbers?</title>
      <dc:creator>Milad Kahsari Alhadi</dc:creator>
      <pubDate>Fri, 31 Jan 2020 19:09:17 +0000</pubDate>
      <link>https://forem.com/clightning/why-we-should-care-about-floating-point-numbers-3fde</link>
      <guid>https://forem.com/clightning/why-we-should-care-about-floating-point-numbers-3fde</guid>
      <description>&lt;p&gt;Since computer memory is limited, you cannot store numbers with infinite precision, no matter whether you use binary fractions or decimal ones: at some point, you have to cut off. But how much accuracy is needed? And where is it needed? How many integer digits and how many fraction digits?&lt;/p&gt;

&lt;p&gt;However, when you write a program that deals with floating-point numbers and makes decisions based on floating-point values, you have to care about losing precision. Obviously, losing precision is like losing your mind and your decision-making logic.&lt;/p&gt;

&lt;p&gt;For example, consider the following C++ code. I wrote it with the assumption that if the reactor temperature reaches a value of around 2.56, the program will stop its activity and send a message to stdout: Alert: Stop Working.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PsVHb0Qw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c07ti1k8c4tom118uv94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PsVHb0Qw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c07ti1k8c4tom118uv94.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But when I compiled and ran the above code, it showed the following result in the console. It is not the result I expected to see. What is going on?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gujNp6Vu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvo90uxw86hvawizsi02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gujNp6Vu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvo90uxw86hvawizsi02.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I set a breakpoint on the nuclearReactorTemp variable and watched its value. Amazingly, although I set it to 2.56, in the debugger you can see it has lost precision and holds the following value:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TbUp6e60--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yarvk6yp1iq8k0qeml3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TbUp6e60--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yarvk6yp1iq8k0qeml3x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I expected, the floating-point variable loses precision, the program logic fails completely, and that is catastrophic when developing critical software. So, to correct the program, we have to increase the accuracy of the variable. If we rewrite the program like the following sample, it will work as we expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8xTXTg34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ns22z3ssciyz44o1b74m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8xTXTg34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ns22z3ssciyz44o1b74m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes: this simple example shows that if we do not take care with floating-point numbers, the logic of our program can fail along the way and make catastrophic decisions.&lt;/p&gt;

&lt;p&gt;Email: &lt;a href="mailto:m.kahsari@gmail.com"&gt;m.kahsari@gmail.com&lt;/a&gt;&lt;br&gt;
Twitter: m.kahsari&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>security</category>
      <category>vscode</category>
      <category>c</category>
    </item>
  </channel>
</rss>
