
Node stops producing blocks, 'Could not lookup information required to validate the transaction' #167

Open
stakeworks opened this issue Aug 5, 2024 · 1 comment

@stakeworks

Since Aug 04 21:33:00 (block #4,522,480) the node hasn't produced any blocks. Error:

Thread 'tokio-runtime-worker' panicked at 'Could not lookup information required to validate the transaction', /home/runner/.cargo/git/checkouts/polkadot-sdk-cff69157b985ed76/f00a911/substrate/frame/executive/src/lib.rs:578
This is a bug. Please report it at:
        https://github.com/InvArch/InvArch-node/issues/new
2024-08-04 21:47:13 [Parachain] Evicting failed runtime instance error=Runtime panicked: Could not lookup information required to validate the transaction
2024-08-04 21:47:13 [Parachain] Block prepare storage changes error: Error at calling runtime api: Execution failed: Runtime panicked: Could not lookup information required to validate the transaction
2024-08-04 21:47:13 [Parachain] 💔 Error importing block 0xc429c0f3e58b7bc492d99f4e381a856918e1c595442889572bbefd42c23259a5: consensus error: Import failed: Error at calling runtime api: Execution failed: Runtime panicked: Could not lookup information required to validate the transaction
====================
Version: 1.6.2-a39f288c416
   0: sp_panic_handler::set::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::execute_extrinsics_with_book_keeping::{{closure}}::panic_cold_display
   7: <alloc::vec::into_iter::IntoIter<T,A> as core::iter::traits::iterator::Iterator>::fold
   8: tracing::span::Span::in_scope
   9: frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::execute_block
  10: tinkernet_runtime::api::dispatch
  11: sp_externalities::scope_limited::ext::using
  12: sc_executor::executor::WasmExecutor<H>::with_instance::{{closure}}
  13: sc_executor::wasm_runtime::RuntimeCache::with_instance
  14: <sc_executor::executor::NativeElseWasmExecutor<D> as sp_core::traits::CodeExecutor>::call
  15: sp_state_machine::execution::StateMachine<B,H,Exec>::execute
  16: <sc_service::client::call_executor::LocalCallExecutor<Block,B,E> as sc_client_api::call_executor::CallExecutor<Block>>::contextual_call
  17: <sc_service::client::client::Client<B,E,Block,RA> as sp_api::CallApiAt<Block>>::call_api_at
  18: <tinkernet_runtime::RuntimeApiImpl<__SrApiBlock__,RuntimeApiImplCall> as sp_session::runtime_api::SessionKeys<__SrApiBlock__>>::__runtime_api_internal_call_api_at::{{closure}}
  19: <tinkernet_runtime::RuntimeApiImpl<__SrApiBlock__,RuntimeApiImplCall> as sp_block_builder::BlockBuilder<__SrApiBlock__>>::__runtime_api_internal_call_api_at
  20: sp_api::Core::execute_block
  21: <&sc_service::client::client::Client<B,E,Block,RA> as sc_consensus::block_import::BlockImport<Block>>::import_block::{{closure}}
  22: <alloc::sync::Arc<T> as sc_consensus::block_import::BlockImport<B>>::import_block::{{closure}}
  23: <cumulus_client_consensus_common::ParachainBlockImport<Block,BI,BE> as sc_consensus::block_import::BlockImport<Block>>::import_block::{{closure}}
  24: <alloc::boxed::Box<dyn sc_consensus::block_import::BlockImport<B>+Error = sp_consensus::error::Error+core::marker::Sync+core::marker::Send> as sc_consensus::block_import::BlockImport<B>>::import_block::{{closure}}
  25: futures_util::future::future::FutureExt::poll_unpin
  26: sc_consensus::import_queue::basic_queue::BlockImportWorker<B>::new::{{closure}}
  27: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
  28: <sc_service::task_manager::prometheus_future::PrometheusFuture<T> as core::future::future::Future>::poll
  29: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
  30: <tracing_futures::Instrumented<T> as core::future::future::Future>::poll
  31: tokio::runtime::park::CachedParkThread::block_on
  32: tokio::runtime::context::runtime::enter_runtime
  33: tokio::runtime::task::core::Core<T,S>::poll
  34: tokio::runtime::task::harness::Harness<T,S>::poll
  35: tokio::runtime::blocking::pool::Inner::run
  36: std::sys::backtrace::__rust_begin_short_backtrace
  37: core::ops::function::FnOnce::call_once{{vtable.shim}}
  38: std::sys::pal::unix::thread::Thread::new::thread_start
  39: <unknown>
  40: <unknown>

OS: Ubuntu 22.04 LTS
Binary version: v1.7.1 (but the binary reports 1.6.2!)
ExecStart=/path/tinkernet-collator \
  --collator \
  --chain /path/tinker-raw.json \
  --base-path /path/.tinkernet-collator \
  --name "StakeWorks | Tinkernet | HDS05" \
  --force-authoring \
  --state-pruning=archive \
  --no-private-ipv4 \
  --prometheus-port \
  --prometheus-external \
  --listen-addr "/ip4/xx.xx.xx.xx/tcp/30303/ws" \
  --bootnodes \
  --bootnodes \
  --bootnodes \
  --reserved-nodes \
  --reserved-nodes \
  --rpc-port \
  --out-peers 50 \
  --in-peers 75 \
  -- \
  --chain kusama \
  --base-path /path/.tinkernet-collator \
  --no-private-ipv4 \
  --sync=fast \
  --blocks-pruning 1000 \
  --port \
  --rpc-port
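The version mismatch noted above (a v1.7.1 binary that reports 1.6.2) can be checked by parsing the node's version line. A minimal sketch, assuming the Substrate-style format `name version-commit` as seen in the panic output ("Version: 1.6.2-a39f288c416"); for illustration it parses a saved sample string, whereas in practice you would capture the output of `/path/tinkernet-collator --version`:

```shell
# Sample version line, copied from the log above; a real check would
# capture this from the binary's --version output instead.
sample="tinkernet-collator 1.6.2-a39f288c416"

reported="${sample#* }"      # drop the binary name  -> "1.6.2-a39f288c416"
reported="${reported%%-*}"   # drop the commit hash  -> "1.6.2"
expected="1.7.1"             # version the release was tagged as

if [ "$reported" != "$expected" ]; then
  echo "version mismatch: expected $expected, binary reports $reported"
fi
```

With the values above this prints `version mismatch: expected 1.7.1, binary reports 1.6.2`, which matches the discrepancy described in this report.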

There are a couple of nodes with the same problem:

[screenshot: other nodes showing the same error]

@stakeworks
Author

Downgrading to v1.6.2 fixes the issue, but the node still struggles to stay connected to its peers.
