One way to avoid synchronization work between servers, which can become a bottleneck as you scale out application servers, is to keep no local state: each request fetches everything it needs fresh from the database layer. That way no data (e.g. session data) needs to be synchronized between application servers. Another example is database sharding, where different database servers hold different parts of the data, so no synchronization work is needed between those servers.
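A minimal sketch of the stateless approach (all names here are hypothetical, and a plain dict stands in for the database layer): the handler keeps nothing in local memory, so any application server can serve any request without coordinating with its peers.

```python
# Simulated shared session store; in practice this would be the
# database layer (or a store like Redis) reachable by every server.
session_store = {"sess-42": {"user": "alice", "cart": ["book"]}}

def handle_request(session_id: str) -> str:
    # Fetch the session fresh from the shared store on every request
    # instead of caching it on this application server.
    session = session_store.get(session_id)
    if session is None:
        return "401 no session"
    return f"200 hello {session['user']}"

print(handle_request("sess-42"))  # same result from any app server
print(handle_request("sess-99"))
```

Because no server holds session state, adding a tenth server is no different from adding a second: the load balancer can route any request anywhere.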
So the answer is that even where there are no algorithmic bottlenecks to scaling out, there are situations where it is not possible, and these have to do with the atomicity of data. If you are dealing with a dataset that cannot be sharded and must be shared across nodes to implement your logic (e.g. a non-divisible social graph), then your only option is scaling up: one big server that holds the whole dataset instead of a bunch of smaller ones that each hold a subset. These situations are fairly rare, however, so in most cases software can be designed to scale out, provided the design takes this into account from the beginning.
Another thing to take into account is design cost.
Designing for scaling out increases development cost. Database sharding, for example, requires additional application-level logic to deal with the shards. For that reason it can make sense to consciously choose to scale up instead, especially at the database layer.