What is actually going on behind the scenes in SQL Server when you SELECT * is a lookup against the database's own system catalog (the metadata exposed through views like sys.columns) to resolve the full column list for your specified table, which is then used to build the explicit list of columns for the query you meant.
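For the curious, you can look at the same metadata yourself through the catalog views (dbo.Orders here is just a made-up table name):

```sql
-- Resolve the column list for a table, roughly what the engine must do
-- before it can expand SELECT * into explicit columns.
SELECT c.name AS column_name,
       t.name AS type_name
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.Orders')
ORDER BY c.column_id;
```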
Now, I expect this process is more than a bit optimized; SELECT * is, after all, a pretty ubiquitous shortcut. But it is still a shortcut, it still has a cost (not least that it drags back columns you don't need and defeats covering indexes), and it's one of the first things any performance-obsessed DBA (i.e. all of 'em!) will tell you to stop doing.
As to why just getting the ID column is quicker: in most common SQL Server designs an ID/autonumber field serves as both the primary key and the clustered index key for the table, so a query that touches only the ID can often be satisfied from the index alone without reading the full rows.
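A minimal sketch of that layout (all table and column names are made up): an IDENTITY column declared as the primary key, which by default also becomes the clustered index:

```sql
-- The PRIMARY KEY on the IDENTITY column becomes the clustered index
-- by default, so the table is physically ordered by OrderID.
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL,
    CustomerID int               NOT NULL,
    PlacedAt   datetime2         NOT NULL,
    Notes      nvarchar(max)     NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- Selecting only the key means the wide Notes column never has to be read.
SELECT OrderID
FROM dbo.Orders
WHERE CustomerID = 42;
```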
I find that for any complex query across very large data sets, the only acceptable solution is to first fetch the set of IDs/keys that match the filter criteria, stick them in a temporary table, and then join that back to the original table (and, more likely than not, other related tables too), selecting only the named columns you actually need.
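That keys-first pattern might look something like this (all object names here are illustrative, not from any real schema):

```sql
-- Step 1: capture just the matching keys into a temp table.
SELECT o.OrderID
INTO #MatchingKeys
FROM dbo.Orders AS o
WHERE o.PlacedAt >= '2008-01-01'
  AND o.CustomerID IN (SELECT CustomerID FROM dbo.PreferredCustomers);

-- Step 2: join the narrow key set back for only the columns you need.
SELECT o.OrderID, o.PlacedAt, c.CustomerName
FROM #MatchingKeys AS k
JOIN dbo.Orders    AS o ON o.OrderID    = k.OrderID
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

DROP TABLE #MatchingKeys;
```

The point of the split is that the expensive filtering work runs against a narrow index, and the wide-row reads only happen for rows you already know you want.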
And ideally this should all be done within a stored procedure, so you get the benefits of a cached, reusable query plan as well as not having to make multiple network calls from the web server to the DB server.
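A sketch of that shape, wrapping the whole keys-then-join dance into a single round trip (again, every object name here is hypothetical):

```sql
CREATE PROCEDURE dbo.GetRecentOrders
    @Since datetime2
AS
BEGIN
    SET NOCOUNT ON;

    -- Narrow key fetch first...
    SELECT o.OrderID
    INTO #MatchingKeys
    FROM dbo.Orders AS o
    WHERE o.PlacedAt >= @Since;

    -- ...then one joined result set back to the caller.
    SELECT o.OrderID, o.PlacedAt, c.CustomerName
    FROM #MatchingKeys AS k
    JOIN dbo.Orders    AS o ON o.OrderID    = k.OrderID
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
END;
```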
I have no idea whether LINQ supports column/field mapping and editor integration with stored procedure result sets, but if it doesn't, I won't even consider it.