I've been working on a class in an iOS app that displays a series of slides with one large image per slide. The user can swipe left or right to navigate to a new slide. The images were large enough that there was a noticeable lag when loading them. After much research, I discovered CATiledLayer.
In short, a CATiledLayer seamlessly blends sections of an image together while the user zooms and pans. Only small sections, or tiles, of the image are loaded into memory and drawn to the screen. This saves a lot of memory and avoids the allocation and decompression lag on the main thread.
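To give you an idea of the basic setup, here's a minimal sketch (in Swift) of a view backed by a CATiledLayer. The 256x256 tile size is an illustrative assumption, and tileImage(forRow:column:scale:) is a placeholder helper, not my actual implementation; it stands in for the disk lookup sketched further down.

```swift
import UIKit

// A minimal sketch of a CATiledLayer-backed view. The tile size is an
// assumption; tileImage(forRow:column:scale:) is a hypothetical helper.
final class TiledImageView: UIView {

    // Back the view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiledLayer = layer as! CATiledLayer
        tiledLayer.tileSize = CGSize(width: 256, height: 256)
        tiledLayer.levelsOfDetail = 4   // four cached resolutions, as described below
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // CATiledLayer calls this once per visible tile, on background threads,
    // so only the tiles actually on screen are ever decoded into memory.
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }

        // The context's transform encodes which level of detail is drawing.
        let scale = context.ctm.a
        let tileSize = (layer as! CATiledLayer).tileSize

        // Map the tile rect back to a row/column in the cached tile grid.
        let column = Int(rect.minX * scale / tileSize.width)
        let row = Int(rect.minY * scale / tileSize.height)

        tileImage(forRow: row, column: column, scale: scale)?.draw(in: rect)
    }

    // Hypothetical helper: load the cached imagename_row#_column#.png for
    // the detail level closest to `scale`. Disk lookup elided here.
    private func tileImage(forRow row: Int, column: Int, scale: CGFloat) -> UIImage? {
        return nil
    }
}
```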
Apple gives you a nice head start on this functionality, but they don't give you everything. In my current iOS project, I am downloading images from a CMS. Once each image is downloaded, I slice it into tiles using Core Graphics drawing methods, then cache the tiles to disk, naming them imagename_row#_column#.png. In all, I save four levels of detail for each image. This means that each image is resized, sliced up, and cached at four different resolutions (just think of having to do that in Photoshop...for every client image). The CATiledLayer implementation then looks for the appropriate tiles to load based on the scale and position of the UIScrollView. It works really well and makes those high-res images load quickly.
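Here's a rough sketch of that resize-slice-cache step, assuming 256-pixel tiles and four halving levels of detail. The file names follow the imagename_row#_column#.png scheme above; the per-level subdirectory is my own addition here to keep the four resolutions apart, and the real tile size and level scales may differ.

```swift
import UIKit

// Sketch of the resize-slice-cache step: render the image at four
// resolutions, slice each into 256px tiles, and write them to disk.
func cacheTiles(for image: UIImage, named name: String, in directory: URL) throws {
    let tileSize: CGFloat = 256

    // Four levels of detail: full size, then 1/2, 1/4, and 1/8 scale.
    for level in 0..<4 {
        let scale = 1.0 / CGFloat(1 << level)
        let levelSize = CGSize(width: image.size.width * scale,
                               height: image.size.height * scale)

        // Render the resized image once per level (scale 1, so points == pixels).
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        let resized = UIGraphicsImageRenderer(size: levelSize, format: format)
            .image { _ in image.draw(in: CGRect(origin: .zero, size: levelSize)) }
        guard let cgImage = resized.cgImage else { continue }

        let levelDirectory = directory.appendingPathComponent("\(name)_level\(level)")
        try FileManager.default.createDirectory(at: levelDirectory,
                                                withIntermediateDirectories: true)

        // Slice the resized bitmap into tiles and write each one out.
        let columns = Int(ceil(levelSize.width / tileSize))
        let rows = Int(ceil(levelSize.height / tileSize))
        for row in 0..<rows {
            for column in 0..<columns {
                let tileRect = CGRect(x: CGFloat(column) * tileSize,
                                      y: CGFloat(row) * tileSize,
                                      width: tileSize,
                                      height: tileSize)
                    .intersection(CGRect(origin: .zero, size: levelSize))
                guard let tile = cgImage.cropping(to: tileRect) else { continue }

                let url = levelDirectory
                    .appendingPathComponent("\(name)_\(row)_\(column).png")
                try UIImage(cgImage: tile).pngData()?.write(to: url)
            }
        }
    }
}
```

In practice you'd run something like this on a background queue right after each download finishes, so the main thread never has to touch the full-size bitmap at all.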