
In Stack Overflow question Redefining lambdas not allowed in C++11, why?, a small program was given that does not compile:

int main() {
    auto test = []{};
    test = []{};  // error: the closure type's copy assignment operator is deleted
}

The question was answered and all seemed fine. Then came Johannes Schaub and made an interesting observation:

If you put a + before the first lambda, it magically starts to work.

So I'm curious: Why does the following work?

int main() {
    auto test = +[]{}; // Note the unary operator + before the lambda
    test = []{};       // OK
}

It compiles fine with both GCC 4.7+ and Clang 3.2+. Is the code standard conforming?



Yes, the code is standard conforming. The + triggers a conversion to a plain old function pointer for the lambda.

What happens is this:

The compiler sees the first lambda ([]{}) and generates a closure object according to §5.1.2. As the lambda is a non-capturing lambda, the following applies:

5.1.2 Lambda expressions [expr.prim.lambda]

6 The closure type for a lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function having the same parameter and return types as the closure type’s function call operator. The value returned by this conversion function shall be the address of a function that, when invoked, has the same effect as invoking the closure type’s function call operator.

This is important as the unary operator + has a set of built-in overloads, specifically this one:

13.6 Built-in operators [over.built]

8 For every type T there exist candidate operator functions of the form

    T* operator+(T*);

And with this, it's quite clear what happens: when operator + is applied to the closure object, the built-in candidate set accepts any pointer type, and the closure type provides exactly one way in: its conversion function to the lambda's function pointer type. That conversion is therefore applied.

The type of test in auto test = +[]{}; is therefore deduced to void(*)(). Now the second line is easy: For the second lambda/closure object, an assignment to the function pointer triggers the same conversion as in the first line. Even though the second lambda has a different closure type, the resulting function pointer is, of course, compatible and can be assigned.

Tuesday, June 1, 2021

The answers here are good, but they are missing an important point. Let me try and describe it.

R is a functional language and does not like to mutate its objects. But it does allow assignment statements, using replacement functions:

levels(x) <- y

is equivalent to

x <- `levels<-`(x, y)

The trick is, this rewriting is done by <-; it is not done by levels<-. levels<- is just a regular function that takes an input and gives an output; it does not mutate anything.

One consequence of that is that, according to the above rule, <- must be recursive:

levels(factor(x)) <- y

is rewritten as

factor(x) <- `levels<-`(factor(x), y)

which is in turn rewritten as

x <- `factor<-`(x, `levels<-`(factor(x), y))

It's kind of beautiful that this pure-functional transformation (up until the very end, where the assignment happens) is equivalent to what an assignment would be in an imperative language. If I remember correctly this construct in functional languages is called a lens.

But then, once you have defined replacement functions like levels<-, you get another, unexpected windfall: you don't just have the ability to make assignments, you have a handy function that takes in a factor, and gives out another factor with different levels. There's really nothing "assignment" about it!

So, the code you're describing is just making use of this other interpretation of levels<-. I admit that the name levels<- is a little confusing because it suggests an assignment, but this is not what is going on. The code is simply setting up a sort of pipeline:

  • Start with dat$product

  • Convert it to a factor

  • Change the levels

  • Store that in res

Personally, I think that line of code is beautiful ;)

Saturday, June 5, 2021

Update: as promised by the Core chair in the bottom quote, the code is now ill-formed:

If an identifier in a simple-capture appears as the declarator-id of a parameter of the lambda-declarator's parameter-declaration-clause, the program is ill-formed.

There were a few issues concerning name lookup in lambdas a while ago. They were resolved by N2927:

The new wording no longer relies on lookup to remap uses of captured entities. It more clearly denies the interpretations that a lambda's compound-statement is processed in two passes or that any names in that compound-statement might resolve to a member of the closure type.

Lookup is always done in the context of the lambda-expression, never "after" the transformation to a closure type's member function body. See [expr.prim.lambda]/8:

The lambda-expression's compound-statement yields the function-body ([dcl.fct.def]) of the function call operator, but for purposes of name lookup, […], the compound-statement is considered in the context of the lambda-expression. [ Example:

struct S1 {
  int x, y;
  int operator()(int);
  void f() {
    [=]()->int {
      return operator()(this->x+y);  // equivalent to: S1::operator()(this->x+(*this).y)
                                     // and this has type S1*
    };
  }
};

— end example ]

(The example also makes clear that lookup does not somehow consider the generated capture member of the closure type.)

The name foo is not (re)declared in the capture; it is declared in the block enclosing the lambda expression. The parameter foo is declared in a block that is nested in that outer block (see [basic.scope.block]/2, which also explicitly mentions lambda parameters). The order of lookup is clearly from inner to outer blocks. Hence the parameter should be selected, that is, Clang is right.

If you were to make the capture an init-capture, i.e. foo = "" instead of foo, the answer would not be clear. This is because the capture now actually induces a declaration whose "block" is not given. I messaged the core chair on this, who replied

This is issue 2211 (a new issues list will appear on the site shortly, unfortunately with just placeholders for a number of issues, of which this is one; I'm working hard to fill in those gaps before the Kona meeting at the end of the month). CWG discussed this during our January teleconference, and the direction is to make the program ill-formed if a capture name is also a parameter name.

Wednesday, June 23, 2021

Your mutable version is fine:

T& operator[](T u);

but the const version should be a const member function as well as returning a const reference:

const T& operator[](T u) const;

This not only distinguishes it from the other overload, but also allows (read-only) access to const instances of your class. In general, overloaded member functions can be distinguished by their parameter types and const/volatile qualifications, but not by their return types.

Friday, July 30, 2021

there's a function print

when you call it like this

some.print(...args)

that effectively gets translated to this

print.call(some, ...args)

When you just pull it out

const going = some.print

You've just gotten a reference to the standalone function, so calling it with

going(...args)

is the same as calling

print(...args)
It's the . between some and print that is magically passing some as this to print. going() has no period so no this is passed.

Note that you could assign going to an object and use the period operator to do the magic "pass the thing on the left as this" operation

function print() {
  console.log(this);
}

const someObj = {
  foo: print,
};
someObj.foo();  // the . passes someObj as this to print

all the class keyword does is help assign print to Something's prototype

class Something {
  print() { console.log(this); }
}

is the same as

function Something() {}  // the constructor
Something.prototype.print = function() { console.log(this); };

which is also effectively the same as this

function print() { console.log(this); }
function Something() {}  // the constructor
Something.prototype.print = print;

Saikat showed using bind. Bind effectively makes a new function that wraps your old function. Example

const going = print.bind(foo);

Is nearly the same as

function createAFunctionThatPassesAFixedThis(fn, objectToUseAsThis) {
  return function(...args) {
    return fn.call(objectToUseAsThis, ...args);
  };
}

const going = createAFunctionThatPassesAFixedThis(print, foo);
going();  // calls print with 'foo' as this

Some of the other answers appear to have a fundamental misunderstanding of this. this is not the "owner", nor is this the current object. this is effectively just another variable inside a function. It gets set automatically if you use the . operator with a function call, so a . b() sets this to a. If the dot operator did not exist, you could still set this by using call or apply, as in somefunction.call(valueForThis, ...args) or somefunction.apply(valueForThis, [...args]);

function print() {
  console.log(this.name);
}
print.call({name: "test"});  // prints test

const foo = {
  name: "foo",
  bar: print,
};
foo.bar();  // prints foo

function Something(name) {
  this.name = name;
}
Something.prototype.someFunc = print;

const s = new Something("something");
s.someFunc();  // prints something

s.foobar = print;
s.foobar();   // prints something

Also note that ES6 added the => arrow operator which binds this to whatever it was when the function is created. In other words

const foo = () => { console.log(this); }

Is the same as

const foo = function() { console.log(this); }.bind(this);

It should also be clear that for both functions made with bind and arrow functions, you cannot change this using call or apply, since effectively they made a wrapper that always sets this to what it was when the wrapper was made, just like createAFunctionThatPassesAFixedThis did above.

Let's add some comments to show what I mean

function createAFunctionThatPassesAFixedThis(fn, objectToUseAsThis) {

  // return a function that calls fn with objectToUseAsThis as this
  return function(...args) {

    // when we arrive here from the "someObject.going()" line below
    // `this` will be "someObject"
    // but on the next line we're passing "objectToUseAsThis" to fn as this
    // so "someObject" is effectively ignored.

    return fn.call(objectToUseAsThis, ...args);
  };
}

const going = createAFunctionThatPassesAFixedThis(print, foo);
const someObject = { going };
someObject.going();  // passes someObject as `this`, but the wrapper ignores it

One more thing: it's very common to use bind to make a function for an event listener or callback, especially in React. Example:

class Component extends React.Component {
  constructor(props) {
    super(props);
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() {
    // 'this' will be the current component since we bound 'this' above
    console.log("was clicked");
  }
  render() {
    return (<button onClick={this.handleClick}>click me</button>);
  }
}
This is handled in at least 3 common ways.

  1. Using bind in the constructor (see above)

  2. Using bind in render

      render() {
        return (<button onClick={this.handleClick.bind(this)}>click me</button>);
      }
  3. Using arrow functions in render

      render() {
        return (<button onClick={() => { this.handleClick(); }}>click me</button>);
      }

Above I showed what bind and => actually do with createAFunctionThatPassesAFixedThis. They create a new function. So it should be clear that if you use style 2 or 3 above, a new function is created every single time render is called. If you use style 1, a new function is created only once, in the constructor. It's a style issue as to whether or not that's important: creating more functions means more garbage, which means a possibly slower webpage, but in most cases, unless you have a crazy complicated webpage that is rendering constantly, it probably doesn't matter which of the 3 ways you do it.

Wednesday, September 1, 2021