nacer@hpmcaa.mcm.hp.com (Abdenacer Moussaoui) (08/03/89)
If I understand it correctly, Sybase provides the following access methods for
select statements:

1) When the sql-server is done extracting all the rows that satisfy the query,
it returns control to the caller.  To obtain a faster response you may specify
a row_size parameter, so that as soon as the specified number of rows has been
extracted the sql-server returns those rows; when you are done processing that
batch of rows you use db_next_row() to continue execution of the select on the
server and extract further rows.  **Note** there is no db_previous_row()
function call, hence no easy way to browse.

2) You may specify what is called row_buffer_size to obtain some random access
over chunks of the selected rows.  When you issue the select statement the
sql-server fills a buffer on the client machine with x rows (x being the
row_buffer_size); you may then access those rows in any order you wish through
some db function calls.  However, in order to obtain further rows from your
select you first have to clear *permanently* some rows from the buffer.  So
unless you know beforehand how many rows the select will return, you cannot
set the buffer size accordingly to provide browsing over all selected rows.

What if the number of matching rows may be very large at times?  How can one
go about choosing an appropriate buffer size?  How do db people feel about
this?  For those of you currently using Transact-SQL, did anybody find a
work-around for this limitation?  Please correct me if I missed anything.
Thank you.

--
nacer @mist.cs.orst.edu
I know Informix has a previous and so does...