Hello Jaro,
Andy is right, and I'm backing him up: remember not to include that URL in the sitemap.
Also, this is a good moment to point out that robots.txt only tells Googlebot not to crawl a URL, which is different from keeping it out of the index. There are cases where URLs "blocked" in robots.txt still end up indexed.
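For reference, a minimal sketch of what such a rule looks like (the `/private/` path is just an example):

```
# robots.txt — blocks crawling of /private/, but the URL can
# still be indexed if Google finds links to it on other sites
User-agent: *
Disallow: /private/
```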
The reliable way to stop Google from indexing a certain URL is to add a robots meta tag with a noindex value.
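A minimal sketch of that tag, placed inside the page's head element:

```html
<!-- Tells compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

Note that for this to work, the page must NOT be blocked in robots.txt: Googlebot has to be able to crawl the page in order to see the noindex tag.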
Here is a quote from Google's Webmaster Central Help Forum:
If you block a file from crawling and Google discovers a URL for that file on another site, it may still index the file using whatever information it can find, even though crawling is blocked. So robots.txt disallow does not necessarily stop something being indexed.
(from the answer by ets, in the note below the Disallow part)
Hope that clarifies things.
Best of luck.
GR.