As I understand it, in the eighties the typical way of handling memory was one RAM chip per bit of data-bus width. If you were building a 16-bit machine and wanted to give it 32K of RAM, you could do this with sixteen 16kbit RAM chips. 128K could equally well be done with sixteen 64kbit chips.
However, if you wanted 64K of RAM, that could only be done by using eight 64kbit chips and making each chip deliver two bits, one after the other, thus incurring a slowdown.
So if you don't want to pay any penalty in access speed, it's 32K or 128K but not in between. Is this correct?
In your hypothetical 16-bit machine with 64KiB of RAM, you could simply implement two 32KiB banks using sixteen 16kbit chips each. This obviously doubles the number of chips and the board space required, which may not be cost-effective compared with just using the next higher-density chips and getting twice as much memory again essentially for free.
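The bank arithmetic is easy to work through by hand, but as a quick sanity check it can be sketched like this (a minimal sketch; the function name and the assumption of full-bus-width banks are mine, not from any particular design):

```python
def dram_chip_count(total_bytes, bus_bits, chip_depth, chip_width):
    """Chips needed to build total_bytes of RAM on a bus_bits-wide bus
    out of chip_depth x chip_width DRAMs, using full-width banks."""
    chips_per_bank = bus_bits // chip_width   # one bank spans the whole data bus
    bank_bytes = chip_depth * bus_bits // 8   # each bank is chip_depth words deep
    banks = -(-total_bytes // bank_bytes)     # ceiling division
    return banks * chips_per_bank

# 32 KiB from 16Kx1 chips on a 16-bit bus: one bank of sixteen chips
assert dram_chip_count(32 * 1024, 16, 16 * 1024, 1) == 16
# 64 KiB the same way: two banks, thirty-two chips
assert dram_chip_count(64 * 1024, 16, 16 * 1024, 1) == 32
# 128 KiB from 64Kx1 chips: back to a single bank of sixteen
assert dram_chip_count(128 * 1024, 16, 64 * 1024, 1) == 16
```

This makes the cost trade-off visible: the 64KiB two-bank option needs twice the chips of the 128KiB single-bank option built from the next density up.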
At least one real-world example exists. The Amiga 500 shipped with 512KiB of RAM, and early models implemented this using sixteen 256kbit chips. The A501 memory expansion contains another sixteen 256kbit chips, giving 1MiB in total.
There were several variations largely driven by cost at any particular point in time.
What is interesting is WHY 1-bit chips were popular. The address bus was typically multiplexed using the RAS and CAS signals, so if the technology of the day made 64K a desirable chip size, you could fit 8 address lines, RAS, CAS, 1 data, WR, RD and CE plus power and ground into something like a 16/18-pin DIL package, and only end up with one signal per chip being unique. Back before multilayer PCBs were cheap, this mattered.
Compare that with an array of eight 8K×8 parts (the same total memory size): now you have to run that 8-bit data bus to every chip, plus 7 bits of address, plus the control signals, plus you need an address decoder, so you are looking at a 24-pin chip with much more routing on the PCB.
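The address multiplexing described above can be illustrated with a toy example. This assumes the common convention of presenting the high-order byte on the shared address pins first (latched by /RAS) and the low-order byte second (latched by /CAS); real boards were free to wire the split differently:

```python
def mux_address(addr16):
    """Split a 16-bit address for a 64Kx1 DRAM into the row byte
    (latched by /RAS) and the column byte (latched by /CAS), which
    share the same 8 address pins."""
    row = (addr16 >> 8) & 0xFF  # high byte presented first, with /RAS
    col = addr16 & 0xFF         # low byte presented second, with /CAS
    return row, col

# 16 address bits travel over only 8 pins, in two strobes
assert mux_address(0xBEEF) == (0xBE, 0xEF)
```

This is why the 64K×1 part fits in a 16-pin package: 16 address bits cost only 8 pins plus the two strobes.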
Eventually speeds got to the point that the lower bus loading made wider devices a better choice (especially as NMOS had horrible noise margins), but if you look at a modern DIMM you will still find that multiple narrow parts are often favoured.
Find someone with a collection of vintage Computer Shopper issues, there's no better research material for such matters. Not only are there articles discussing the merits of different computers and their memory schemes, there are ads touting the month-by-month pricing, speed and capacity of direct-market processors, RAM chips and disk drives.
A few pointers:
-- Early dynamic RAMs were multivendor, with a common parts numbering scheme (and similar DIP pinouts) through the 4k to 256k generations.
-- 4,096 × 1 chips, for example, were 4104; 16,384 × 1 chips were 4116; 65,536 × 1 chips were 4164; and 262,144 × 1 chips were 41256.
-- A suffixed letter often indicated the package type: P for plastic (epoxy), C for ceramic.
-- 4-bit-parallel parts were called "nybble-wide" or "nibble-wide" and were numbered 4416 and 4464. The 4464P was the commonest sort used in Apple //e's from 1986 onwards, providing 64K×4 with a typical 120 ns RAS access time.
-- The original type-1 IBM AT used paired 4164C's soldered in piggyback stacks to populate its DIP sockets with "128kbit" RAMs, an arrangement possible because IBM custom-packaged these RAMs at their plants to fit a 256-kbit pinout. Finding these today is an Easter Egg hunt through Grampa's workbench, as they were often removed to make way for 41256 chips and squirreled away in some unmarked DIP rail or organizer drawer.
-- 1-megabit parts were the last to operate on a 5-volt supply, but were internally 3.3-volt parts.
-- 4-megabit parts began a wholesale transition to new technologies like 3.3-volt logic, JEDEC pinouts, SMT surface mounting and Single Inline Memory Modules. By then most new computers used word-wide (16-bit) memory.
One other reason: 1-bit chips could easily be configured as 8- or 9-bit-wide arrays using only one type of chip, the latter in cases where memory parity (error checking) was desired. Sometimes a 4+4+1 arrangement was used, but this necessitated two chip types with potentially different reliability characteristics (which is not helpful in a parity system).
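In the 8+1 arrangement, the ninth 1-bit chip stores a parity bit computed across each byte. A minimal sketch of the even-parity scheme such systems typically used (the function name is mine):

```python
def parity_bit(byte):
    """Even-parity bit for the ninth 1-bit DRAM of an 8+1 array:
    chosen so that all nine stored bits contain an even number of ones.
    A read that yields odd total parity signals a memory error."""
    return bin(byte & 0xFF).count("1") & 1

assert parity_bit(0b00000000) == 0  # zero ones: already even
assert parity_bit(0b00000001) == 1  # one set bit: parity bit makes it two
assert parity_bit(0b00000011) == 0  # two set bits: already even
```

Because the ninth chip sees exactly the same timing and load as the other eight, its failure behaviour matches, which is the point the answer makes about mixing chip types.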
It was very common in 1980s systems to utilize DRAM chips that output 4 bits. Common variants were 256Kb DRAM chips accessed as 64K x 4 bits, and 1Mb chips accessed as 256K x 4 bits. Thus, a 16-bit data bus might only require 4 DRAM chips to provide 512KB.
4-bit-wide DRAMs were used widely because they reduced the chip count needed for a given data-bus width. Eight chips connected to a 32-bit data bus was a common configuration into the early 1990s. Also, later versions of 1980s 8-bitters like the Apple //e Platinum and the Commodore 64C were able to reduce their DRAM footprint to only two chips, reducing manufacturing costs.
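The chip-count savings from wider parts are straightforward arithmetic; a minimal sketch of the configurations mentioned above (function name is mine):

```python
def wide_dram_config(bus_bits, chip_depth, chip_width):
    """Return (chip count, total bytes) for one bank of
    chip_depth x chip_width DRAMs filling a bus_bits-wide data bus."""
    chips = bus_bits // chip_width            # enough chips to span the bus
    total_bytes = chip_depth * bus_bits // 8  # bank depth times bus width
    return chips, total_bytes

# 1Mbit chips as 256K x 4 on a 16-bit bus: four chips give 512 KB
assert wide_dram_config(16, 256 * 1024, 4) == (4, 512 * 1024)
# the same chips on a 32-bit bus: the common eight-chip configuration
assert wide_dram_config(32, 256 * 1024, 4) == (8, 1024 * 1024)
```

Swapping chip_width from 1 to 4 cuts the chip count by a factor of four for the same bus, which is exactly the manufacturing-cost argument made above.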