How to compute the loss for each individual training instance in Theano / Keras


I am trying to tweak the Keras code to generate the loss for each individual training instance in each training epoch. The _fit_loop(...) Keras / Theano function below generates the average loss for each batch, which happens at

outs = f(ins_batch) 

Any hints on how to get an array that contains the loss values of the individual instances in each batch and epoch?

Thanks in advance.

def _fit_loop(self, f, ins, out_labels=[], batch_size=32,
              nb_epoch=100, verbose=1, callbacks=[],
              val_f=None, val_ins=None, shuffle=True,
              callback_metrics=[]):
    '''Abstract fit function for f(ins).
    Assumes that f returns a list, labeled by out_labels.

    # Arguments
        f: Keras function returning a list of tensors
        ins: list of tensors to be fed to `f`
        out_labels: list of strings, display names of
            the outputs of `f`
        batch_size: integer batch size
        nb_epoch: number of times to iterate over the data
        verbose: verbosity mode, 0, 1 or 2
        callbacks: list of callbacks to be called during training
        val_f: Keras function to call for validation
        val_ins: list of tensors to be fed to `val_f`
        shuffle: whether to shuffle the data at the beginning of each epoch
        callback_metrics: list of strings, the display names of the metrics
            passed to the callbacks. They should be the
            concatenation of the list of display names of the outputs of
            `f` and the list of display names of the outputs of `f_val`.

    # Returns
        `History` object.
    '''
    do_validation = False
    if val_f and val_ins:
        do_validation = True
        if verbose:
            print('Train on %d samples, validate on %d samples' %
                  (ins[0].shape[0], val_ins[0].shape[0]))

    nb_train_sample = ins[0].shape[0]
    index_array = np.arange(nb_train_sample)

    self.history = cbks.History()
    callbacks = [cbks.BaseLogger()] + callbacks + [self.history]
    if verbose:
        callbacks += [cbks.ProgbarLogger()]
    callbacks = cbks.CallbackList(callbacks)

    # it's possible to callback a different model than self
    # (used by Sequential models)
    if hasattr(self, 'callback_model') and self.callback_model:
        callback_model = self.callback_model
    else:
        callback_model = self

    callbacks._set_model(callback_model)
    callbacks._set_params({
        'batch_size': batch_size,
        'nb_epoch': nb_epoch,
        'nb_sample': nb_train_sample,
        'verbose': verbose,
        'do_validation': do_validation,
        'metrics': callback_metrics,
    })
    callbacks.on_train_begin()
    callback_model.stop_training = False
    self.validation_data = val_ins

    for epoch in range(nb_epoch):
        callbacks.on_epoch_begin(epoch)
        if shuffle == 'batch':
            index_array = batch_shuffle(index_array, batch_size)
        elif shuffle:
            np.random.shuffle(index_array)

        batches = make_batches(nb_train_sample, batch_size)
        epoch_logs = {}
        for batch_index, (batch_start, batch_end) in enumerate(batches):
            batch_ids = index_array[batch_start:batch_end]
            try:
                if type(ins[-1]) is float:
                    # do not slice the training phase flag
                    ins_batch = slice_X(ins[:-1], batch_ids) + [ins[-1]]
                else:
                    ins_batch = slice_X(ins, batch_ids)
            except TypeError:
                raise Exception('TypeError while preparing batch. '
                                'If using HDF5 input data, '
                                'pass shuffle="batch".')
            batch_logs = {}
            batch_logs['batch'] = batch_index
            batch_logs['size'] = len(batch_ids)
            callbacks.on_batch_begin(batch_index, batch_logs)
            outs = f(ins_batch)  # returns the batch-averaged loss (and metrics)
            if type(outs) != list:
                outs = [outs]
            for l, o in zip(out_labels, outs):
                batch_logs[l] = o

            callbacks.on_batch_end(batch_index, batch_logs)

            if batch_index == len(batches) - 1:  # last batch
                # validation
                if do_validation:
                    # replace with self._evaluate
                    val_outs = self._test_loop(val_f, val_ins,
                                               batch_size=batch_size,
                                               verbose=0)
                    if type(val_outs) != list:
                        val_outs = [val_outs]
                    # same labels assumed
                    for l, o in zip(out_labels, val_outs):
                        epoch_logs['val_' + l] = o
        callbacks.on_epoch_end(epoch, epoch_logs)
        if callback_model.stop_training:
            break
    callbacks.on_train_end()
    return self.history
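One possible workaround, shown below as a minimal sketch rather than a change to _fit_loop itself: assuming the training arrays are called X_train and y_train (hypothetical names) and the model is compiled in the usual way, a custom Keras Callback can call test_on_batch on one instance at a time at the end of every epoch and collect the resulting losses. It is slow for large datasets, but it yields an array of per-instance loss values for each epoch without touching the Keras internals.

import numpy as np
from keras.callbacks import Callback

class PerSampleLoss(Callback):
    """Sketch: record the loss of every training instance at the end
    of each epoch by evaluating one sample at a time (simple but slow)."""

    def __init__(self, X, y):
        super(PerSampleLoss, self).__init__()
        self.X = X
        self.y = y
        self.per_epoch_losses = []  # one numpy array of losses per epoch

    def on_epoch_end(self, epoch, logs={}):
        losses = []
        for i in range(len(self.X)):
            out = self.model.test_on_batch(self.X[i:i + 1], self.y[i:i + 1])
            # test_on_batch returns a scalar loss, or a list whose first
            # entry is the loss when extra metrics were compiled
            loss = out[0] if isinstance(out, list) else out
            losses.append(float(loss))
        self.per_epoch_losses.append(np.asarray(losses))

# usage sketch (X_train / y_train assumed):
# tracker = PerSampleLoss(X_train, y_train)
# model.fit(X_train, y_train, nb_epoch=10, callbacks=[tracker])
# tracker.per_epoch_losses[epoch] holds the loss of every training instance

A faster alternative along the same lines: the Keras objective functions (e.g. keras.objectives.categorical_crossentropy) already return one value per sample, and the mean over the batch is only taken afterwards inside training.py, so a backend function compiled on the unaggregated objective tensor can return all per-sample losses in a single pass. The callback above is simply the most version-agnostic way to get the numbers.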

