Well, I think the reason this is the case is that "qwerty = 4" is ambiguous: are you defining a new variable called "qwerty" or calling the setter? Ruby resolves this ambiguity by saying it will create a new variable, thus the "self." is required.
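For example, with an ordinary accessor (a minimal sketch; the class and the "qwerty" name are just illustrative):

class Foo
  attr_accessor :qwerty   # defines both the qwerty getter and the qwerty= setter

  def assign
    qwerty = 4       # creates a brand-new local variable; the qwerty= setter is NOT called
    self.qwerty = 4  # explicitly invokes the setter, so @qwerty is now 4
  end
end

f = Foo.new
f.assign
f.qwerty # returns 4, and only because of the "self." line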
Here is another case where you need "self.":
class A
  def test
    4
  end

  def use_variable
    test = 5   # creates a local variable named test
    test       # the bare name now refers to the local, not the method
  end

  def use_method
    test = 5   # same local variable...
    self.test  # ...but "self." forces the method call
  end
end

a = A.new
a.use_variable # returns 5
a.use_method   # returns 4
As you can see, a bare reference to "test" is ambiguous (local variable or method?), so the "self." is required to get at the method.
EDIT: Also, this is why the C# example is actually not a good comparison, because in C# you declare variables in a way that cannot be confused with a call to the setter... if you had defined a variable in C# with the same name as the accessor, you would need to qualify calls to the accessor with "this.", just like in the ruby case.
@Mike Stone
Hi! I understand and appreciate the points you've made and your
example was great. Believe me when I say, if I had enough reputation,
I'd vote up your response. Yet we still disagree:
- on a matter of semantics, and
- on a central point of fact
First I claim, not without irony, that we're having a semantic debate about
the meaning of 'ambiguity'.
When it comes to parsing and programming language semantics (the subject
of this question), surely you would admit that the notion of 'ambiguity'
covers a broad spectrum. Let's just adopt some ad hoc notation:
1. ambiguous: lexical ambiguity (lex must 'look ahead')
2. Ambiguous: grammatical ambiguity (yacc must defer to parse-tree analysis)
3. AMBIGUOUS: ambiguity that is only resolved by knowing everything at the
   moment of execution
(and there's junk between 2 and 3 too). All these categories are resolved by
gathering more contextual info, looking more and more globally. So when you
say,
"qwerty = 4" is UNAMBIGUOUS in C#
when there is no variable defined...
I couldn't agree more. But by the same token, I'm saying
"qwerty = 4" is un-Ambiguous in ruby
(as it now exists)
"qwerty = 4" is Ambiguous in C#
And we're not yet contradicting each other. Finally, here's where we really
disagree: Either ruby could or could not be implemented without any further
language constructs such that,
For "qwerty = 4," ruby UNAMBIGUOUSLY
invokes an existing setter if there
is no local variable defined
You say no. I say yes; another ruby could exist which behaves exactly like
the current one in every respect, except that "qwerty = 4" defines a new
variable when no setter and no local exists, invokes the setter if one
exists, and assigns to the local if one exists. I fully accept that I
could be wrong. In fact, a reason why I might be wrong would be interesting.
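To make the point of divergence concrete, here is a small sketch (the class and accessor names are just illustrative) of the one case where that imagined ruby would behave differently from the ruby we actually have:

class Widget
  attr_accessor :qwerty

  def divergent_case
    # no local named qwerty exists yet in this scope, but a setter does
    qwerty = 4  # today's ruby: creates a new local, and @qwerty stays nil
                # the imagined ruby: would invoke the qwerty= setter instead
    @qwerty
  end
end

Widget.new.divergent_case # returns nil today; it would return 4 in the imagined ruby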
Let me explain.
Imagine you are writing a new OO language with accessor methods looking
like instance vars (like ruby & C#). You'd probably start with
conceptual grammars something like:
var = expr // assignment
method = expr // setter method invocation
But the parser-compiler (not even the runtime) will puke, because even after
all the input is grokked there's no way to know which grammar is pertinent.
You're faced with a classic choice. I can't be sure of the details, but
basically ruby does this:
var = expr // assignment (new or existing)
// method = expr, disallow setter method invocation without .
that is why it's un-Ambiguous, while C# does this:
symbol = expr // push 'symbol=' onto parse tree and decide later
// if local variable is def'd somewhere in scope: assignment
// else if a setter is def'd in scope: invocation
For C#, 'later' is still at compile time.
I'm sure ruby could do the same, but 'later' would have to be at runtime, because
as ben points out you don't know until the statement is executed which case
applies.
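For what it's worth, the ruby we have today really does make this call at parse time rather than at execution time. A small sketch (names are just illustrative):

class Gadget
  attr_accessor :qwerty

  def initialize
    @qwerty = 4
  end

  def parse_time_demo
    qwerty = 5 if false  # never executes, but the parser has already decided that
                         # from here on a bare "qwerty" means a local in this method
    qwerty               # reads the never-assigned local, not the getter
  end
end

Gadget.new.parse_time_demo # returns nil, not 4

Delete the dead assignment and the same bare "qwerty" goes back to being a call to the getter, returning 4.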
My question was never intended to mean "do I really need the 'self.'?" or "what
potential ambiguity is being avoided?" Rather, I wanted to know why this
particular choice was made. Maybe it's not performance. Maybe it just got the
job done, or it was considered best to always allow a one-liner local to
override a method (a pretty rare requirement)...
But I'm sort of suggesting that the most dynamic language might be the one that
postpones this decision the longest, and chooses semantics based on the most contextual
info: so if you have no local and you defined a setter, it would use the setter. Isn't
this why we like ruby, smalltalk, and objc: because method invocation is decided at runtime,
offering maximum expressiveness?