c++ - Why don't compilers make unsigned vs signed comparison safe?


As we know, this code generates a warning:

for (int i = 0; i < v.size(); ++i)

Setting aside solutions like auto i = 0u;, decltype(v.size()), or std::vector<int>::size_type — pretend we're forced to have both a signed and an unsigned value. The compiler automatically converts the int to unsigned int (the actual types don't matter). Using an explicit cast, static_cast<unsigned int>(i), makes the warning go away, but that's bad, because I did the same thing the compiler did and merely silenced an important warning!
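To see why the warning matters, here is a minimal runnable sketch (the values are illustrative) showing how the implicit conversion turns an "obviously true" comparison into a false one:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    int i = -1;
    // The sign-compare warning fires here: i is converted to the
    // unsigned type of v.size(), so -1 becomes a huge value and the
    // comparison that looks obviously true evaluates to false.
    std::cout << std::boolalpha << (i < v.size()) << '\n';  // prints false
}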

The better solution is:

if ((i < 0) || (static_cast<unsigned int>(i) < v.size())) 
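If this check is needed in more than one place, it can live in a small helper; C++20 later standardized exactly this kind of mathematically correct comparison as std::cmp_less and friends in <utility>. A pre-C++20 sketch (the name signed_less is made up for illustration):

#include <cstddef>

// True iff i is mathematically less than n, even when i is negative.
bool signed_less(int i, std::size_t n) {
    return (i < 0) || (static_cast<std::size_t>(i) < n);
}

Usage would then be if (signed_less(i, v.size())) ... with no warning and no silent wraparound.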

Understandably, C is "closer to the metal" and, as a consequence, more unsafe. In C++, there's no excuse for this. C++ and C diverge (as they have been doing for many years), and hundreds of improvements to C++ have increased safety. I highly doubt this change would hurt performance, either.

Is there a reason why compilers don't do this automatically?

N.B.: This does happen in the real world. See Vulnerability Note VU#159523:

This vulnerability in Adobe Flash arises because Flash passes a signed integer to calloc(). An attacker has control over this integer and can send negative numbers. Because calloc() takes size_t, which is unsigned, the negative number is converted into a very large number, which is generally too big to allocate, and as a result calloc() returns NULL, causing the vulnerability to exist.
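A minimal sketch of the conversion the note describes (attacker_controlled and its value are hypothetical):

#include <cstdio>
#include <cstdlib>

int main() {
    int attacker_controlled = -1;
    // -1 is implicitly converted to size_t (i.e. SIZE_MAX), the
    // allocation fails, and calloc returns NULL; code that doesn't
    // check the result then operates on a null pointer.
    void *p = std::calloc(attacker_controlled, 1);
    std::printf("calloc returned %p\n", p);
    std::free(p);
}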

An important goal of C++ is compatibility, and there's a ton of code that would not compile if signed/unsigned mixing were a fatal error. See Stroustrup's C++ design goals from 1986.

Your proposal also adds a comparison that is not present in the source: the compiler would have to silently generate the extra i < 0 check shown above, emitting code the programmer never wrote.

Arguably, C++11 made this case safer, if you use a ranged-for and auto:

for (auto x : v)
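As a complete sketch (the element name x is just illustrative), the index disappears entirely, so no signed/unsigned comparison can occur in the loop at all:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    for (auto x : v)  // no index, hence no mixed-sign comparison
        std::cout << x << '\n';
}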
